




























































































About the Authors

Dr. S. N. Sivanandam completed his BE (Electrical and Electronics Engineering) in 1964 from Government College of Technology, Coimbatore, and MSc (Engineering) in Power System in 1966 from PSG College of Technology, Coimbatore. The total number of undergraduate and postgraduate projects guided by him in Computer Science and Engineering and Electrical and Electronics Engineering is around 600. He has worked as Professor and Head, Computer Science and Engineering Department, PSG College of Technology, Coimbatore. He has been identified as an outstanding person in the field of Computer Science and Engineering in MARQUIS 'Who's Who', October 2003 issue, USA, and as an outstanding person in the field of Computational Science and Engineering in 'Who's Who', December 2005 issue, Saxe-Coburg Publications, as well as in the WHO's WHO Registry of National Business. He has guided 30 PhD research works, and at present 9 PhD research scholars are working under him. The total number of his technical publications in International/National Journals/Conferences is around 700. He has also received the Certificate of Merit 2005-2006 for his paper from The Institution of Engineers (India). He has chaired 7 International Conferences and 30 National Conferences. He is a member of various professional bodies like IE (India), ISTE, CSI, ACS and SSI, and a technical advisor for various reputed industries and engineering institutions. His research areas include Modeling and Simulation, Neural Networks, Fuzzy Systems and Genetic Algorithm, Pattern Recognition, Multidimensional System Analysis, Linear and Nonlinear Control Systems, Signal and Image Processing, Control System, Power System, Numerical Methods, Parallel Computing, Data Mining and Database Security.

Dr. S. N. Deepa completed her BE Degree from Government College of Technology, Coimbatore, in 1999, her ME Degree from PSG College of Technology, Coimbatore, in 2004, and her PhD in Electrical Engineering from Anna University, Chennai, in 2008. She is currently Assistant Professor, Dept. of Electrical and Electronics Engineering, Anna University of Technology, Coimbatore. She was a gold medalist in her BE Degree Program. She received the G.D. Memorial Award in 1997 and the Best Outgoing Student Award from PSG College of Technology in 2004. Her ME Thesis won a National Award from the Indian Society of Technical Education and L&T in 2004. She has published 7 books and 32 papers in International and National Journals. Her research areas include Neural Network, Fuzzy Logic, Genetic Algorithm, Linear and Nonlinear Control Systems, Digital Control, Adaptive and Optimal Control.
Contents

Preface

1. Introduction
   1.1 Neural Networks
      1.1.1 Artificial Neural Network: Definition
      1.1.2 Advantages of Neural Networks
   1.2 Application Scope of Neural Networks
   1.4 Genetic Algorithm
   1.5 Hybrid Systems
      1.5.1 Neuro Fuzzy Hybrid Systems
      1.5.2 Neuro Genetic Hybrid Systems
      1.5.3 Fuzzy Genetic Hybrid Systems
   1.6 Soft Computing
   1.7 Summary

2. Artificial Neural Network: An Introduction
   Learning Objectives
   2.1 Fundamental Concept
      2.1.1 Artificial Neural Network
      2.1.2 Biological Neural Network
      2.1.3 Brain vs. Computer: Comparison Between Biological Neuron and Artificial Neuron
   2.2 Evolution of Neural Networks
   2.3 Basic Models of Artificial Neural Network
      2.3.1 Connections
      2.3.2 Learning
         2.3.2.1 Supervised Learning
         2.3.2.2 Unsupervised Learning
         2.3.2.3 Reinforcement Learning
      2.3.3 Activation Functions
   2.4 Important Terminologies of ANNs
      2.4.5 Momentum Factor
      2.4.6 Vigilance Parameter
      2.4.7 Notations
   2.5 McCulloch-Pitts Neuron

5. Unsupervised Learning Networks
   Kohonen Self-Organizing Feature Maps
      5.4.5.1 LVQ2
   Special Networks

8.4.4 Fuzzy Composition

16.2.1 Comparison of Fuzzy Systems with Neural Networks

17. Applications of Soft Computing
1. Introduction

Learning Objectives: Scope of soft computing. Various components under soft computing. Description of artificial neural networks with its advantages and applications.

1.1 Neural Networks

A neural network is a processing device, either an algorithm or actual hardware, whose design was inspired by the design and functioning of animal brains and components thereof. The computing world has a lot to gain from neural networks, also known as artificial neural networks or neural nets. Neural networks have the ability to learn by example, which makes them very flexible and powerful. For neural networks, there is no need to devise an algorithm to perform a specific task, i.e., there is no need to understand the internal mechanisms of that task. These networks are also well suited for real-time systems because of their fast response and computational times, which are due to their parallel architecture.

Before discussing artificial neural networks, let us understand how the human brain works.
The human brain is an amazing processor. Its exact workings are still a mystery. The most basic element of the human brain is a specific type of cell, known as the neuron, which doesn't regenerate. Because neurons aren't slowly replaced, it is assumed that they provide us with our abilities to remember, think and apply previous experiences to our every action. The human brain comprises about 100 billion neurons. Each neuron can connect with up to 200,000 other neurons, although 1,000-10,000 interconnections are typical. The power of the human mind comes from the sheer number of neurons and their multiple interconnections, as well as from genetic programming and learning. There are over 100 different classes of neurons. The individual neurons are complicated: they have a myriad of parts, subsystems and control mechanisms, and they convey information via a host of electrochemical pathways. Together these neurons and their connections form a process which is not binary, not stable, and not synchronous. In short, it is nothing like the currently available electronic computers, or even artificial neural networks.
1.1.1 Artificial Neural Network: Definition

An artificial neural network (ANN) may be defined as an information-processing model that is inspired by the way biological nervous systems, such as the brain, process information. This model tries to replicate only the most basic functions of the brain. The key element of an ANN is the novel structure of its information processing system, which is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.

Artificial neural networks, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. In biological systems, learning involves adjustments to the synaptic connections that exist between the neurons. ANNs undergo a similar change that occurs when the concept on which they are built leaves the academic environment and is thrown into the harsher world of users who simply want to get a job done on computers accurately all the time. Many neural networks now being designed are statistically quite accurate, but they still leave their users with a bad taste as they falter when it comes to solving problems accurately. Unfortunately, few applications tolerate that level of error.
1.1.2 Advantages of Neural Networks

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, could be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network could be thought of as an "expert" in the particular category of information it has been given to analyze. This expert could be used to provide projections in new situations of interest and answer "what if" questions. Other advantages of working with an ANN include:

1. Adaptive learning: An ANN has the ability to learn how to do tasks based on the data given for training or initial experience.
2. Self-organization: An ANN can create its own organization or representation of the information it receives during learning time.
3. Real-time operation: ANN computations may be carried out in parallel. Special hardware devices are being designed and manufactured to take advantage of this capability of ANNs.
4. Fault tolerance via redundant information coding: Partial destruction of a neural network leads to the corresponding degradation of performance. However, some network capabilities may be retained even after major network damage.

Currently, neural networks can't function as a user interface which translates spoken words into instructions for a machine, but someday they would have this skill. Then VCRs, home security systems, CD players, and word processors would simply be activated by voice. Touch screen and voice editing would replace the word processors of today. Besides, spreadsheets and databases would be imparted such a level of usability that would be pleasing to everyone. But for now, neural networks are only entering the marketplace in niche areas, for example among financial institutions that compete on having the lowest bad loan rate. For these institutions, a network trained to recognize genuine loan applicants might be an improvement over their current selection process. Indeed, some banks have proved that the failure rate on loans approved by neural networks is lower than those approved by their best traditional methods. Also, some credit card companies are using neural networks in their application screening process.
This newest method of looking into the future by analyzing past experiences has generated its own unique set of problems. One such problem is to provide a reason behind a computer-generated answer, say, as to why a particular loan application was denied. Explaining how a network learned and why it recommends a particular decision has been difficult. The inner workings of neural networks are "black boxes"; some people have even called the use of neural networks "voodoo engineering." To justify the decision-making process, several neural network tool makers have provided programs that explain which input through which node dominates the decision-making process. From this information, experts in the application may be able to infer which data plays a major role in decision-making and its importance.

Apart from filling the niche areas, neural network work is also progressing in other, more promising application areas. The next section of this chapter goes through some of these areas and briefly details the current work. The objective is to make the reader aware of various possibilities where neural networks might offer solutions, such as language processing, character recognition, image compression, pattern recognition, etc. Neural networks can be viewed from a multi-disciplinary point of view, as shown in Figure 1-1.

Figure 1-1 The multi-disciplinary point of view of neural networks: computer science (artificial intelligence), mathematics (approximation theory, optimization), physics (statistical physics, dynamical systems), engineering (image/signal processing, control theory, robotics) and economics/finance (time series, data mining).

1.2 Application Scope of Neural Networks

The neural networks have good scope of being used in areas such as air traffic control, where the position, direction and speed of each radar blip could be taken as input to the network; the output would be the air traffic controller's instruction in response to each blip, an easy task for a neural network.
1.4 Genetic Algorithm

Genetic algorithm (GA) is reminiscent of sexual reproduction, in which the genes of two parents combine to form those of their children. When it is applied to problem solving, the basic premise is that we can create an initial population of individuals representing possible solutions to a problem we are trying to solve. Each of these individuals has certain characteristics that make them more or less fit as members of the population. The more fit members will have a higher probability of mating and producing offspring that have a significant chance of retaining the desirable characteristics of their parents. This method is very effective at finding optimal or near-optimal solutions to a wide variety of problems because it does not impose many of the limitations required by traditional methods. It is an elegant generate-and-test strategy that can identify and exploit regularities in the environment, and it results in solutions that are globally optimal or nearly so.

Genetic algorithms are adaptive computational procedures modeled on the mechanics of natural genetic systems. They express their ability by efficiently exploiting the historical information to speculate on new offspring with expected improved performance. GAs are executed iteratively on a set of coded solutions, called a population, with three basic operators: selection/reproduction, crossover and mutation. They use only the payoff (objective function) information and probabilistic transition rules for moving to the next iteration. They are different from most of the normal optimization and search procedures in the following ways: they work with a coding of the parameter set, not the parameters themselves; they search from a population of points, not a single point; they use only the payoff information; and they use probabilistic, not deterministic, transition rules.

Since a GA works simultaneously on a set of coded solutions, it has very little chance to get stuck at local optima when used as an optimization technique. Again, it does not need any sort of auxiliary information, like the derivative of the optimizing function. Moreover, the resolution of the possible search space is increased by operating on coded (possible) solutions and not on the solutions themselves. Further, this search space need not be continuous. Recently, GAs are finding widespread applications in solving problems requiring efficient and effective search, in business, scientific and engineering circles, like synthesis of neural network architectures, traveling salesman problem, graph coloring, scheduling, numerical optimization, and pattern recognition and image processing.
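As an illustration, the following is a minimal sketch of this generate-and-test loop with the three basic operators. The bit-string coding, binary tournament selection and the OneMax payoff are illustrative choices, not specifics from this chapter.

    import random

    def genetic_algorithm(fitness, n_bits=16, pop_size=50, generations=100,
                          crossover_rate=0.9, mutation_rate=0.01):
        """Minimal GA over coded (bit-string) solutions using only payoff
        information and probabilistic transition rules."""
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scores = [fitness(ind) for ind in pop]
            # Selection/reproduction: fitter members get a higher mating chance.
            def select():
                a, b = random.sample(range(pop_size), 2)   # binary tournament
                return pop[a] if scores[a] > scores[b] else pop[b]
            children = []
            while len(children) < pop_size:
                p1, p2 = select(), select()
                if random.random() < crossover_rate:       # single-point crossover
                    cut = random.randrange(1, n_bits)
                    p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                for child in (p1, p2):                     # mutation: rare bit flips
                    children.append([b ^ (random.random() < mutation_rate)
                                     for b in child])
            pop = children[:pop_size]
        return max(pop, key=fitness)

    # Illustrative payoff: maximize the number of 1-bits ("OneMax").
    best = genetic_algorithm(lambda ind: sum(ind))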
1.5 Hybrid Systems

Hybrid systems can be classified into three different systems: neuro fuzzy hybrid systems, neuro genetic hybrid systems and fuzzy genetic hybrid systems. These are discussed in detail in the following sections.
1.5.1 Neuro Fuzzy Hybrid Systems

A neuro fuzzy hybrid system is a fuzzy system that uses a learning algorithm derived from or inspired by neural network theory to determine its parameters (fuzzy sets and fuzzy rules) by processing data samples. It combines the ideas of fuzzy systems and neural networks, having the advantages of both: learning from data on the one hand, and interpretable linguistic rules and logical operations on the other.

Neuro fuzzy hybrid systems combine the advantages of fuzzy systems, which deal with explicit knowledge that can be explained and understood, and neural networks, which deal with implicit knowledge that can be acquired by learning. Neural network learning provides a good way to adjust the knowledge of the expert (i.e., the artificial intelligence system) and automatically generate additional fuzzy rules and membership functions to meet certain specifications. It helps reduce design time and costs. On the other hand, FL enhances the generalization capability of a neural network system by providing more reliable output when extrapolation is needed beyond the limits of the training data.
1.5.2 Neuro Genetic Hybrid Systems

Genetic algorithms (GAs) have been increasingly applied in ANN design in several ways: topology optimization, genetic training algorithms and control parameter optimization. In topology optimization, a GA is used to select a topology (number of hidden layers, number of hidden nodes, interconnection pattern) for the ANN, which in turn is trained using some training scheme, most commonly back-propagation. In genetic training algorithms, the learning of an ANN is formulated as a weight optimization problem, usually using the inverse mean squared error as a fitness measure. Many of the control parameters, such as learning rate, momentum rate and tolerance level, can also be optimized using GAs. In addition, GAs have been used in many other innovative ways: to create new indicators based on existing ones, select good indicators, evolve optimal trading systems and complement other techniques such as fuzzy logic.
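To make the weight-optimization formulation concrete, here is a sketch of such a fitness measure. The network evaluator forward(weights, x) is a hypothetical helper standing in for any fixed-topology ANN.

    def fitness(weights, samples, forward):
        # Inverse mean squared error as fitness: lower training error
        # means higher fitness for the GA to maximize.
        mse = sum((target - forward(weights, x)) ** 2
                  for x, target in samples) / len(samples)
        return 1.0 / (1.0 + mse)   # the +1 guards against division by zero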
1.5.3 Fuzzy Genetic Hybrid Systems

The optimization abilities of GAs are used to develop the best set of rules to be used by a fuzzy inference engine, and to optimize the choice of membership functions. A particular use of GAs is in fuzzy classification systems, where an object is classified on the basis of the linguistic values of the object attributes. The most difficult part of building a system like this is to find the appropriate set of fuzzy rules. The most obvious approach is to obtain knowledge from experts and translate this into a set of fuzzy rules, but this approach is time consuming; besides, experts may not be able to put their knowledge into an appropriate form of words. A second approach is to obtain the fuzzy rules through machine learning, whereby the knowledge is automatically extracted or deduced from sample cases. A GA performs a search over all (discrete) fuzzy subsets of an interval and has features which make it applicable for generating the rules for a fuzzy system where objects are classified by linguistic terms. Coding the rules genetically enables the system to deal with multivalue FL and is more efficient, as it is consistent with numeric coding of fuzzy examples. The training data and randomly generated rules are combined to create the initial population, giving a better starting point for reproduction. Finally, a fitness function measures the strength of the rules, balancing the quality and diversity of the population.
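A sketch of that last step, assuming hypothetical helpers classify(rule, x) for applying one linguistic rule and similarity(r1, r2) for comparing two rules; the diversity weight is an arbitrary choice.

    def rule_fitness(rule, training_data, population, classify, similarity):
        # Quality: fraction of training examples the rule classifies correctly.
        correct = sum(classify(rule, x) == label for x, label in training_data)
        quality = correct / len(training_data)
        # Diversity: penalize rules that crowd the same region of rule space.
        crowding = sum(similarity(rule, other)
                       for other in population) / len(population)
        return quality - 0.5 * crowding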
1.6 Soft Computing

The two major problem-solving technologies include:
1. hard computing;
2. soft computing.

Hard computing deals with precise models where accurate solutions are achieved quickly. Soft computing, on the other hand, deals with approximate models, exploiting the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality. The two technologies are contrasted in Figure 1-2.

Figure 1-2 Problem-solving technologies: hard computing (precise models; symbolic logic reasoning, traditional numerical modeling and search) and soft computing (approximate models; approximate reasoning).

Soft computing involves a partnership of several fields, the most important being neural networks, GA and FL. Also included is the field of probabilistic reasoning, employed for its uncertainty control techniques. These fields can also be combined, for example neural networks and FL. A hybrid technique, in fact, would inherit all the advantages but won't have the less desirable features of the single soft computing components. It has to possess a good learning capacity and less sensitivity to the problem of local extremes than neural networks, and it should generate an explicit knowledge base, which has a linguistic representation and a very low degree of computational complexity. An important thing about the constituents of soft computing is that they are complementary, not competitive, offering their own advantages and techniques to partnerships to allow solutions to otherwise unsolvable problems. The constituents of soft computing are examined in turn, following which existing applications of partnerships are described.

"Negotiation is the communication process of a group of agents in order to reach a mutually accepted agreement on some matter." This definition is typical of the research being done into negotiation and coordination in relation to software agents. It is an obvious necessity that when multiple agents interact, they attempt to sort out any conflicts of resources or interest. It is important to appreciate that agents are owned and controlled by people in order to complete tasks on their behalf. An example of a possible multiple-agent-based negotiation scenario is the competition between network providers for a consumer's telephone call: when the consumer
picks up the phone and dials, an agent will communicate on the consumer's behalf with all the available network providers. Each provider will make an offer, and after the first round of offers, network providers may wish to modify their offer to make it more competitive. The new offer is then submitted to the consumer agent and the process continues until a conclusion is reached. One advantage of this process is that the provider can dynamically alter its pricing strategy to account for changes in demand and competition, thereby maximizing revenue. The consumer will obviously benefit from the constant competition between providers. Best of all, the process is entirely autonomous, as the agents embody and act on the beliefs and constraints of the parties they represent. Further changes can be made to the protocol so that providers can bid low without being in danger of making a loss. For example, if the consumer chooses to go with the lowest bid but pays the second-lowest price, this will take away the incentive to underbid or overbid.
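A sketch of that protocol change (a second-price rule); the provider names and prices below are made up.

    def award_contract(bids):
        # The consumer agent accepts the lowest bid but pays the
        # second-lowest price, so providers gain nothing by underbidding
        # or overbidding relative to their true cost.
        ordered = sorted(bids, key=bids.get)
        winner, runner_up = ordered[0], ordered[1]
        return winner, bids[runner_up]

    provider, price = award_contract({"NetA": 10.0, "NetB": 12.5, "NetC": 11.0})
    # NetA wins the contract and is paid 11.0, the second-lowest price.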
Much of the negotiation theory is based around human behavior models and, as a result, it is often translated using Distributed Artificial Intelligence techniques. The problems associated with machine negotiation are as difficult to solve as they are with human negotiation, and involve issues such as privacy, security and deception.

1.7 Summary

The computing world has a lot to gain from neural networks, whose ability to learn by example makes them very flexible and powerful. In case of neural networks, there is no need to devise an algorithm to perform a specific task, i.e., there is no need to understand the internal mechanisms of that task. They are also well suited for real-time systems because of their fast response and computational times, which are due to their parallel architecture. Neural networks also contribute to other areas of research, such as neurology and psychology: they are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain. Perhaps the most exciting aspect of neural networks is the possibility that someday "conscious" networks might be produced. Today, many scientists believe
that consciousness is a "mechanical" property and that "conscious" neural networks are a realistic possibility.

Fuzzy logic was conceived as a better method for sorting and handling data but has proven to be an excellent choice for many control system applications, since it mimics human control logic. It can be built into anything from small, hand-held products to large, computerized process control systems. It uses an imprecise but very descriptive language to deal with input data more like a human operator, and it is robust and often works when first implemented with little or no tuning.

When applied to optimize ANNs for forecasting and classification problems, GAs can be used to search for the right combination of input data, the most suitable forecast horizon, the optimal or near-optimal network interconnection patterns and weights among the neurons, and the control parameters (learning rate, momentum rate, tolerance level, etc.) based on the training data used and the pre-set criteria. Like ANNs, in many cases GAs can arrive at an acceptable solution without the time and expense of an exhaustive search.

Soft computing is a relatively new concept, the term really entering general circulation in 1994, coined by Dr. Lotfi Zadeh of the University of California, Berkeley, USA; it encompasses several fields of computing. The three that have been examined in this chapter are neural networks, important for their ability to adapt and learn, FL, for its exploitation of partial truth and imprecision, and GA, for its search and optimization abilities.
2. Artificial Neural Network: An Introduction

Figure 2-1 Architecture of a simple artificial neuron net.
Figure 2-2 Neural net of pure linear equation.

It should be noted that each neuron has an internal state of its own. This internal state, called the activation or activity level of the neuron, is a function of the inputs the neuron receives. The activation signal of a neuron is transmitted to other neurons. Remember, a neuron can send only one signal at a time, which can be transmitted to several other neurons.

To depict the basic operation of a neural net, consider a set of neurons, say X1 and X2, transmitting signals to another neuron, Y. Here X1 and X2 are input neurons, which are connected to the output neuron Y over weighted interconnection links (W1 and W2), as shown in Figure 2-1. For this simple neuron net architecture, the net input has to be calculated in the following way:

y_in = x1 w1 + x2 w2

where x1 and x2 are the activations of the input neurons X1 and X2, i.e., the output of input signals. The output y of the output neuron Y can be obtained by applying the activation function over the net input, i.e., y = f(y_in); activation functions will be discussed in the forthcoming sections. The above calculation of the net input is similar to the calculation of the output of a pure linear straight-line equation (y = mx), as shown in Figure 2-2: here, to obtain the output y, the slope m is directly multiplied with the input signal x. This is a linear equation, and when slope and input are varied, the output varies linearly, as shown by the graph for y = mx in Figure 2-3. This shows that the weight involved in the ANN is equivalent to the slope of the linear straight line.
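A short numeric illustration of the two calculations (the values are arbitrary):

    # Net input of the two-input neuron of Figure 2-1.
    x1, x2 = 0.6, 0.4            # activations of input neurons X1 and X2
    w1, w2 = 0.3, 0.7            # weights on the interconnection links
    y_in = x1 * w1 + x2 * w2     # net input to the output neuron Y

    # Pure linear equation of Figure 2-2: the slope m plays the weight's role.
    m, x = 0.3, 0.6
    y = m * x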
2.1.2 Biological Neural Network

It is well known that the human brain consists of a huge number of neurons, approximately 10^11, with numerous interconnections. A schematic diagram of a biological neuron is shown in Figure 2-4.
Figure 2-4 Schematic diagram of a biological neuron.

The biological neuron depicted in Figure 2-4 consists of three main parts:
1. Soma or cell body, where the cell nucleus is located.
2. Dendrites, where the nerve is connected to the cell body.
3. Axon, which carries the impulses of the neuron.

Dendrites are tree-like networks made of nerve fiber connected to the cell body. An axon is a single, long connection extending from the cell body and carrying signals from the neuron. The end of the axon splits into fine strands, and each strand terminates into a small bulb-like organ called a synapse. It is through the synapse that the neuron introduces its signals to the other neurons. The receiving ends of these synapses on the nearby neurons can be found both on the dendrites and on the cell body; there are approximately 10^4 synapses per neuron in the human brain. Electric impulses are passed between the synapse and the dendrites. This type of signal transmission involves a chemical process in which specific transmitter substances are released from the sending side of the junction, resulting in an increase or decrease of the electric potential inside the body of the receiving cell. If the electric potential reaches a threshold, the receiving cell fires and a pulse of fixed strength and duration is sent out through its axon; the cell then has to wait for a period of time called the refractory period before it can fire again.
Table 2-1 Terminology relationships between biological and artificial neurons

Biological neuron    Artificial neuron
Cell                 Neuron
Dendrites            Weights or interconnections
Soma                 Net input
Axon                 Output

Figure 2-5 Mathematical model of artificial neuron.

Figure 2-5 shows a mathematical representation of the above-discussed chemical processing taking place in an artificial neuron. In this model, the net input is elucidated as

y_in = x1 w1 + x2 w2 + ... + xn wn = sum over i = 1 to n of x_i w_i

where i represents the ith processing element. The activation function is applied over it to calculate the output. The weight represents the strength of the synapse connecting the input and the output neurons. A positive weight corresponds to an excitatory synapse, and a negative weight corresponds to an inhibitory synapse.
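A minimal sketch of this model; the binary step activation with threshold 0.5 is an assumed choice (activation functions are discussed in Section 2.3.3):

    def neuron(inputs, weights, threshold=0.5):
        # Net input: y_in = x1*w1 + x2*w2 + ... + xn*wn
        y_in = sum(x * w for x, w in zip(inputs, weights))
        # Activation function applied over the net input gives the output.
        return 1 if y_in >= threshold else 0

    # Positive weights act as excitatory synapses, negative as inhibitory.
    out = neuron([1, 1, 0], [0.4, 0.3, -0.2])   # y_in = 0.7, so the neuron fires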
The terms associated with the biological neuron and their counterparts in the artificial neuron are presented in Table 2-1. The biological and artificial neurons can be compared on the basis of the following criteria:

1. Speed: The cycle time of execution in an ANN is of a few nanoseconds, whereas in the case of a biological neuron it is of a few milliseconds. Hence, the artificial neuron modeled using a computer is faster.

2. Processing: Basically, the biological neuron can perform massive parallel operations simultaneously. The artificial neuron can also perform several parallel operations simultaneously.

3. Size and complexity: The total number of neurons in the brain is about 10^11 and the total number of interconnections is about 10^15. Hence, it can be noted that the complexity of the brain is comparatively higher, i.e., the computational work takes place not only in the brain cell body, but also in the axon, synapse, etc. On the other hand, the size and complexity of an ANN is based on the chosen application and the network designer, and is far less than that of the brain.

4. Storage capacity (memory): The biological neuron stores the information in its interconnections or in synapse strength, but in an artificial neuron it is stored in contiguous memory locations. In an artificial neuron, the continuous loading of new information may sometimes overload the memory locations; as a result, some of the addresses containing older memory may be destroyed. But in the case of the brain, new information can be added in the interconnections by adjusting the strength without destroying the older information. A disadvantage related to the brain is that sometimes its memory may fail to recollect the stored information, whereas in an artificial neuron it can be retrieved. Owing to these facts, the adaptability is more toward an artificial neuron.
5. Tolerance: The biological neuron possesses fault-tolerant capability, whereas the artificial neuron has no fault tolerance. The distributed nature of the biological neurons enables them to store and retrieve information even when the interconnections between them get disconnected; thus biological neurons are fault tolerant. But in the case of artificial neurons, the information gets corrupted if the network interconnections are disconnected. Biological neurons can also accept redundancies, which is not possible in artificial neurons: even when some cells die, the human nervous system appears to be performing with the same efficiency.

6. Control mechanism: In an artificial neuron modeled using a computer, there is a control unit present in the central processing unit, which can transfer and control precise scalar values from unit to unit, but there is no such control unit for monitoring in the brain. The strength of a neuron in the brain depends on the active chemicals present and on whether neuron connections are strong or weak as a result of structure rather than individual synapses. However, the ANN possesses simpler interconnections and is free from chemical actions similar to those taking place in the brain (biological neuron). Thus, the control mechanism of an artificial neuron is very simple compared to that of a biological neuron.

So, we have gone through a comparison between ANNs and biological neural networks. In short, it can be noted that in a biological neural network no single neuron carries specific information. The above-mentioned characteristics make it apt to term ANNs as connectionist models, parallel distributed processing models, self-organizing systems, neuro-computing systems and neuro-morphic systems.
Figure 2-7 Multilayer feed-forward network.
Figure 2-8 (A) Single node with own feedback. (B) Competitive nets.

The processing elements are arranged in layers, with the nodes connected by links carrying various weights. The input layer is that which receives the input, and this layer has no function except buffering the input signal. The output layer generates the output of the network. Any layer that is formed between the input and output layers is called a hidden layer, which is internal to the network and has no direct contact with the external environment. It should be noted that there may be zero to several hidden layers in an ANN: the more the number of hidden layers, the more is the complexity of the network, which may, however, provide a more efficient output response. In the case of a fully connected network, the output from one layer is connected to every node in the next layer.

A network is said to be a feed-forward network if no neuron in the output layer is an input to a node in the same layer or in a preceding layer. On the other hand, when outputs can be directed back as inputs to same- or preceding-layer nodes, it results in the formation of feedback networks. If the feedback of the output of the processing elements is directed back as input to the processing elements in the same layer, it is called lateral feedback. Recurrent networks are feedback networks with closed loop. Figure 2-8(A) shows a simple recurrent neural network having a single neuron with feedback to itself.
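A minimal sketch of a fully connected feed-forward pass in the style of Figure 2-7; units are kept linear for brevity (no activation function), and all weights are illustrative:

    def forward(x, layers):
        # Each layer's input is the weighted output of the previous layer;
        # the input layer itself only buffers the signal.
        for weights in layers:                 # one weight list per node
            x = [sum(xi * w for xi, w in zip(x, node)) for node in weights]
        return x

    w_hidden = [[0.2, -0.1], [0.5, 0.3], [-0.4, 0.8]]   # 2 inputs -> 3 hidden nodes
    w_out = [[0.6, -0.2, 0.1]]                          # 3 hidden -> 1 output node
    y = forward([1.0, 0.5], [w_hidden, w_out])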
Figure 2-9 Single-layer recurrent network.
Figure 2-10 Multilayer recurrent network.

Figure 2-9 shows a single-layer network with a feedback connection in which a processing element's output can be directed back to the processing element itself, to another processing element, or to both. The architecture of a competitive layer is shown in Figure 2-8(B), the competitive interconnections having fixed weights of -ε; this net will be discussed in the unsupervised learning network category. Apart from the network architectures discussed so far, there also exists another type of architecture with lateral feedback, which is called the on-center-off-surround or lateral inhibition structure.
Figure 2-11 On-center-off-surround (lateral inhibition) structure.

In this structure, each processing neuron receives two different classes of inputs: "excitatory" input from nearby processing elements and "inhibitory" inputs from more distantly located processing elements. This type of interconnection is shown in Figure 2-11, where the connections with open circles are excitatory connections and the links with solid connective circles are inhibitory connections. From Figure 2-10, it can be noted that a processing element output can be directed back to the nodes in a preceding layer. Also, in these networks, a processing element output can be directed back to the processing element itself and to other processing elements in the same layer. Thus, the various network architectures discussed from Figures 2-6 to 2-11 can be suitably used for giving effective solutions to a problem by using an ANN.
2.3.2 Learning

The main property of an ANN is its capability to learn. Learning or training is a process by means of which a neural network adapts itself to a stimulus by making proper parameter adjustments, resulting in the production of the desired response. Broadly, there are two kinds of learning in ANNs:

1. Parameter learning: it updates the connecting weights in a neural net.
2. Structure learning: it focuses on the change in network structure (which includes the number of processing elements as well as their connection types).

The above two types of learning can be performed simultaneously or separately. Apart from these two categories, the learning in an ANN can be generally classified into three categories: supervised learning, unsupervised learning and reinforcement learning. Let us discuss these learning types in detail.
2.3.2.1 Supervised Learning

The learning here is performed with the help of a teacher. Let us take the example of the learning process of a small child. The child doesn't know how to read or write; he/she is taught by the parents at home and by the teacher in school. The children are trained and molded to recognize the alphabets, numerals, etc., and their each and every action is supervised by a teacher. Actually, a child works on the basis of the output that he/she has to produce. All these real-time events involve supervised learning methodology. Similarly, in ANNs following supervised learning, each input vector requires a corresponding target vector, which represents the desired output. The input vector along with the target vector is called a training pair. The network here is informed precisely about what should be emitted as output. The block diagram of Figure 2-12 depicts the working of a supervised learning network.

During training, the input vector is presented to the network, which results in an output vector; this is the actual output vector. Then the actual output vector is compared with the desired (target) output vector. If there exists a difference between the two output vectors, an error signal is generated by the network and is used for adjusting the weights until the actual output matches the desired output.
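A sketch of one such error-driven adjustment for a single linear output unit; the LMS-style update rule is one illustrative choice among many, not the book's specific algorithm:

    def supervised_step(x, weights, target, lr=0.1):
        # Present the input vector and compute the actual output Y.
        y = sum(xi * w for xi, w in zip(x, weights))
        # Error signal (D - Y): desired output minus actual output.
        error = target - y
        # Adjust the weights in proportion to the error signal.
        return [w + lr * error * xi for xi, w in zip(x, weights)]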
Figure 2-12 Supervised learning.

2.3.2.2 Unsupervised Learning
The learning here is performed without the help of a teacher. Consider the learning process of a tadpole: it learns by itself. That is, a child fish learns to swim by itself; it is not taught by its mother. Thus, its learning process is independent and is not supervised by a teacher. In ANNs following unsupervised learning, the input vectors of similar type are grouped without the use of training data to specify how a typical member of each group looks or to which group a member belongs. In the training process, the network receives the input patterns and organizes these patterns to form clusters. When a new input pattern is applied, the neural network gives an output response indicating the class to which the input pattern belongs. If for an input a pattern class cannot be found, then a new class is generated. The block diagram of unsupervised learning is shown in Figure 2-13.

From Figure 2-13 it is clear that there is no feedback from the environment to inform what the outputs should be or whether the outputs are correct. In this case, the network must itself discover patterns, regularities, features or categories from the input data and relations for the input data over the output. While discovering all these features, the network undergoes change in its parameters. This process is called self-organizing, in which exact clusters will be formed by discovering similarities and dissimilarities among the objects.
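A sketch of this self-organizing behavior as simple competitive clustering; squared Euclidean distance and the update rate are assumed choices:

    def competitive_step(x, clusters, lr=0.2):
        # Find the cluster most similar to the input pattern (the "winner").
        dist = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        winner = min(range(len(clusters)), key=lambda i: dist(clusters[i]))
        # Move the winner toward the input; no teacher or target is involved.
        clusters[winner] = [ci + lr * (xi - ci)
                            for xi, ci in zip(x, clusters[winner])]
        return winner    # the class to which the input pattern belongs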
2.3.2.3 Reinforcement Learning

This learning process is similar to supervised learning. In the case of supervised learning, the correct target output values are known for each input pattern. But, in some cases, less information might be available.