dc.contributor.author | CAHILL, VINNY | en |
dc.date.accessioned | 2008-04-27T07:54:13Z | |
dc.date.available | 2008-04-27T07:54:13Z | |
dc.date.issued | 2005 | en |
dc.date.submitted | 2005 | en |
dc.identifier.citation | Dowling J., Curran E., Cunningham R., Cahill V., Using Feedback in Collaborative Reinforcement Learning to Adaptively Optimise MANET Routing, IEEE Transactions on Systems, Man, and Cybernetics - Part A, 35, 3, 2005, 360, 372 | en |
dc.identifier.other | Y | en |
dc.description | PUBLISHED | en |
dc.description.abstract | Designers face many system optimization problems
when building distributed systems. Traditionally, designers have
relied on optimization techniques that require either prior knowledge
or centrally managed runtime knowledge of the system's
environment, but such techniques are not viable in dynamic
networks where topology, resource, and node availability are
subject to frequent and unpredictable change. To address this
problem, we propose collaborative reinforcement learning (CRL)
as a technique that enables groups of reinforcement learning
agents to solve system optimization problems online in dynamic,
decentralized networks. We evaluate an implementation of CRL
in a routing protocol for mobile ad hoc networks, called SAMPLE.
Simulation results show how feedback in the selection of links
by routing agents enables SAMPLE to adapt and optimize its
routing behavior to varying network conditions and properties,
resulting in optimization of network throughput. In the experiments,
SAMPLE displays emergent properties such as traffic flows
that exploit stable routes and reroute around areas of wireless
interference or congestion. SAMPLE is an example of a complex
adaptive distributed system. | en |
dc.description.sponsorship | This work was supported in part by the TRIP project funded under the
Programme for Research in Third Level Institutions (PRTLI) administered by
the Higher Education Authority of Ireland and in part by the European
Union funded "Digital Business Ecosystem" Project IST-507953. | en |
dc.format.extent | 360 | en |
dc.format.extent | 372 | en |
dc.format.mimetype | application/pdf | |
dc.language.iso | en | en |
dc.relation.ispartofseries | IEEE Transactions on Systems, Man, and Cybernetics - Part A | en |
dc.relation.ispartofseries | 35 | en |
dc.relation.ispartofseries | 3 | en |
dc.rights | Y | en |
dc.subject | Feedback | en |
dc.subject | learning systems | en |
dc.subject | mobile ad hoc network | en |
dc.subject | routing | en |
dc.title | Using Feedback in Collaborative Reinforcement Learning to Adaptively Optimise MANET Routing | en |
dc.type | Journal Article | en |
dc.type.supercollection | scholarly_publications | en |
dc.type.supercollection | refereed_publications | en |
dc.identifier.peoplefinderurl | http://people.tcd.ie/vjcahill | en |
dc.identifier.rssinternalid | 23382 | en |
dc.identifier.rssuri | http://ieeexplore.ieee.org/iel5/3468/30695/01420665.pdf?tp=&isnumber=&arnumber=1420665 | en |
dc.identifier.uri | http://hdl.handle.net/2262/16502 | |