CS5811: Advanced Artificial Intelligence
Fall 2009 Presentation Information


Congyi

Friday, Nov. 20, 2009

Paper: "Generating Useful Network-based Features for Analyzing Social Networks."
Jun Karamon, Yutaka Matsuo and Mitsuru Ishizuka.
In the Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI-2008), pages 1162-1168, 2008.

Reason for choice: My research focuses on Vehicular Ad hoc Networks (VANETs), and I will work on the topic of data aggregation and data mining in VANETs. Recently, social networks have become very popular, and I am very interested in them. I think research in social networks may lead to some interesting ideas and heuristics for my future research.

Abstract: Recently, many Web services such as social networking services, blogs, and collaborative tagging have become widely popular. Many attempts are being made to investigate user interactions by analyzing social networks among users. However, analyzing a social network with attributional data is often not an easy task because numerous ways exist to define features through aggregation of different tables. In this study, we propose an algorithm to identify important network-based features systematically from a given social network to analyze user behavior efficiently and to expand the services. We apply our method for link-based classification and link prediction tasks with two different datasets, i.e., an @cosme (an online viral marketing site) dataset and a Hatena Bookmark (collaborative tagging service) dataset, to demonstrate the usefulness of our algorithm. Our algorithm is general and can provide useful network-based features for social network analyses.
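
As a rough illustration of the kind of network-based features the paper generates by aggregation, here is a minimal sketch; the specific features and names below are illustrative choices, not the authors' actual feature set:

```python
def node_features(adj):
    """Compute a few simple network-based features per node from an
    adjacency list: the node's degree, plus aggregates (max, sum) over
    its neighbors' degrees. Systematically combining such aggregation
    operators is the general idea; these exact features are examples."""
    deg = {v: len(ns) for v, ns in adj.items()}
    feats = {}
    for v, ns in adj.items():
        nbr_degs = [deg[u] for u in ns]
        feats[v] = {
            "degree": deg[v],
            "max_nbr_degree": max(nbr_degs) if nbr_degs else 0,
            "sum_nbr_degree": sum(nbr_degs),
        }
    return feats
```

Such per-node feature vectors can then be fed to an ordinary classifier for link-based classification.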

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Kit

Monday, Nov. 30, 2009

Paper: "Layered Intelligence for Agent-based Crowd Simulation."
Bikramjit Banerjee, Ahmed Abukmail, and Landon Kraemer.
In Simulation, vol. 85, pp. 621-633, October 2009.

Reason for choice: I chose this paper because I have a great interest in agent-based modeling (especially on big parallel hardware), and this paper does several things: it has a clear link to AI topics we have been discussing, it is a relatable example of agent-based modeling, and it is extremely recent (published last month!). I hope both to gain and to impart some knowledge and appreciation of ABM, as well as to get some idea of the state of the art so that I may start planning future research.

Abstract: We adapt a scalable layered intelligence technique from the game industry, for agent-based crowd simulation. We extend this approach for planned movements, pursuance of assignable goals, and avoidance of dynamically introduced obstacles/threats as well as congestions, while keeping the system scalable with the number of agents. We demonstrate the various behaviors in hall-evacuation scenarios, and experimentally establish the scalability of the frame rates with increasing numbers of agents.
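
The layering idea can be loosely pictured as behavior layers consulted in priority order, with the first layer that has an opinion winning. This is only a hedged sketch of that general pattern, not the paper's actual architecture; the layer names are invented:

```python
def act(agent_state, layers):
    """Pick an agent's action by consulting behavior layers in priority
    order (e.g. avoid_threat before avoid_congestion before goal_seek).
    Each layer returns an action or None; the first non-None wins."""
    for layer in layers:
        action = layer(agent_state)
        if action is not None:
            return action
    return "idle"  # no layer had an opinion

# Illustrative layers: flee a threat if present, otherwise head to goal.
layers = [
    lambda s: "flee" if s.get("threat") else None,
    lambda s: "move_to_goal",
]
```

Because each layer is a cheap local check, arbitration cost stays constant per agent, which is one way such schemes remain scalable with the number of agents.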

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Dean

Monday, Nov. 30, 2009

Paper: "Genetics-based Machine Learning and Behavior Based Robotics: A New Synthesis."
Marco Dorigo, Uwe Schnepf.
In IEEE Transactions on Systems, Man, and Cybernetics, 23(1), 141-154, January 1993.

Reason for choice: I chose this paper because I am interested in AI that learns behaviors. The paper deals with this in relation to robotics, which is an interesting application, but it could also be applied to agents in virtual worlds.

Abstract: Intelligent robots should be able to use sensor information to learn how to behave in a changing environment. As environmental complexity grows, the learning task becomes more and more difficult. We face this problem using an architecture based on learning classifier systems and on the structural properties of animal behavioural organization, as proposed by ethologists. After a description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved. Our simulated robot learns to follow a light and to avoid hot dangerous objects. While these two simple behavioural patterns are independently learnt, coordination is attained by means of a learning coordination mechanism. Again this capacity is demonstrated by performing a number of experiments.
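
A learning classifier system, roughly, maintains condition-action rules whose strengths are adjusted by reinforcement, with matching rules competing to fire. As a toy sketch of that mechanism only (not the authors' implementation; all names are illustrative):

```python
import random

def select_rule(rules, percept, rng=random):
    """Among rules whose condition matches the percept, pick one with
    probability proportional to its strength (a toy stand-in for a
    classifier system's bidding/auction step)."""
    matching = [r for r in rules if r["condition"](percept)]
    if not matching:
        return None
    total = sum(r["strength"] for r in matching)
    pick = rng.uniform(0, total)
    for r in matching:
        pick -= r["strength"]
        if pick <= 0:
            return r
    return matching[-1]

def reinforce(rule, reward, rate=0.1):
    """Nudge a rule's strength toward the reward received after acting
    on its advice; useful rules strengthen, useless ones fade."""
    rule["strength"] += rate * (reward - rule["strength"])
```

In the paper's setting, separate rule populations learn individual behaviors (light-following, heat-avoidance), and a further learning mechanism coordinates them.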

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Antti

Wednesday, Dec. 2, 2009

Paper: "Acquiring user tradeoff strategies and preferences for negotiating agents: A default-then-adjust method"
Xudong Luo, Nicholas R. Jennings, Nigel Shadbolt.
In International Journal of Human-Computer Studies, Volume 64, Issue 4, April 2006, Pages 304-321.

Reason for choice: As an IT major I have always been interested in online services and in practical applications of AI research. As webpages become more standardized, and if the semantic web makes a breakthrough, I expect to see more intelligent agents working on behalf of the user. An agent using some of these algorithms might be able to consolidate online shopping and deal-seeking from different vendors into a single application.

Abstract: A wide range of algorithms have been developed for various types of negotiating agents. In developing such algorithms the main focus has been on their efficiency and their effectiveness. However, this is only a part of the picture. Typically, agents negotiate on behalf of their owners and for this to be effective the agents must be able to adequately represent their owners' strategies and preferences for negotiation. However, the process by which such knowledge is acquired is typically left unspecified. To address this problem, we undertook a study of how user information about negotiation tradeoff strategies and preferences can be captured. Specifically, we devised a novel default-then-adjust acquisition technique. In this, the system firstly does a structured interview with the user to suggest the attributes that the tradeoff could be made between, then it asks the user to adjust the suggested default tradeoff strategy by improving some attribute to see how much worse the attribute being traded off can be made while still being acceptable, and, finally, it asks the user to adjust the default preference on the tradeoff alternatives. This method is consistent with the principles of standard negotiation theory and to demonstrate its effectiveness we implemented a prototype system and performed an empirical evaluation in an accommodation renting scenario. The result of this evaluation indicates the proposed technique is helpful and efficient in accurately acquiring the users' tradeoff strategies and preferences.

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Reid

Wednesday, Dec. 2, 2009

Paper: "People detection and tracking using stereo vision and color."
Rafael Munoz-Salinas, Eugenio Aguirre, Miguel Garcia-Silvente.
In Image and Vision Computing, Volume 25, Issue 6 (2007), pp. 995-1007.

Reason for choice: It's a research paper I find interesting.

Abstract: People detection and tracking are important capabilities for applications that aim to achieve a natural human–machine interaction. Although the topic has been extensively explored using a single camera, the availability and low price of new commercial stereo cameras make them an attractive sensor for developing more sophisticated applications that take advantage of depth information. This work presents a system able to visually detect and track multiple people using a stereo camera placed at an under-head position. This camera position is especially appropriate for human–machine applications that require interacting with people or analyzing human facial gestures. The system models the background as a height map that is employed to easily extract foreground objects, among which people are found using a face detector. Once a person has been spotted, the system is capable of tracking him while still looking for more people. Our system tracks people by combining color and position information (using the Kalman filter). Tracking based exclusively on position information is unreliable when people establish close interactions; thus, we also include color information about people's clothes in order to increase the tracking robustness. The system has been extensively tested, and the results show that the use of color greatly reduces the errors of the tracking system. In addition, the people detection technique employed, based on combining plan-view map information and a face detector, proved in our experiments to avoid false detections. Finally, the low computing time required for the detection and tracking process makes the system suitable for real-time applications.
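
The position half of such a tracker is typically a Kalman filter over a constant-velocity motion model. Below is a minimal one-dimensional sketch of that standard filter, assuming scalar position measurements; it illustrates the technique only and is not the authors' code (the noise parameters q and r are illustrative):

```python
def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    """Filter noisy 1-D position measurements with a constant-velocity
    Kalman filter. State is (position x, velocity v) with covariance P;
    each step predicts with the motion model, then corrects with z."""
    x, v = measurements[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]
    out = []
    for z in measurements:
        # Predict: x <- x + dt*v, and propagate the covariance.
        x = x + dt * v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update: blend in the position measurement z via the gain K.
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x
        x, v = x + K0 * y, v + K1 * y
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        out.append(x)
    return out
```

As the abstract notes, position alone is ambiguous when people pass close to each other, which is why the system fuses this kind of estimate with clothing color.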

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Yi

Friday, Dec. 4, 2009

Paper: "Completeness and Optimality Preserving Reduction for Planning."
Y. Chen and G. Yao.
In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-09), 2009.

Reason for choice:

Abstract:

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Xiao

Friday, Dec. 4, 2009

Paper: "On the Partial Observability of Temporal Uncertainty."
Michael D. Moffitt.
In Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI 2007), Vancouver, British Columbia, Canada, Volume 2, pp. 1031-1037.

Reason for choice: I chose the paper because I want to learn more about temporal uncertainty.

Abstract: We explore a means to both model and reason about partial observability within the scope of constraint-based temporal reasoning. Prior studies of uncertainty in Temporal CSPs have required the realization of all exogenous processes to be made entirely visible to the agent. We relax this assumption and propose an extension to the Simple Temporal Problem with Uncertainty (STPU), one in which the executing agent is made aware of the occurrence of only a subset of uncontrollable events. We argue that such a formalism is needed to encode those complex environments whose external phenomena share a common, hidden source of temporal causality. After characterizing the levels of controllability in the resulting Partially Observable STPU and various special cases, we generalize a known family of reduction rules to account for this relaxation, introducing the properties of extended contingency and sufficient observability. We demonstrate that these modifications enable a polynomial filtering algorithm capable of determining a local form of dynamic controllability; however, we also show that there do remain some instances whose global controllability cannot yet be correctly identified by existing inference rules, leaving the true computational complexity of dynamic controllability an open problem for future research.
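
The controllability analysis builds on the Simple Temporal Problem. As background only (this sketch covers the fully controllable special case, not the STPU machinery), an STP can be checked for consistency by running all-pairs shortest paths on its distance graph:

```python
def stp_consistent(n, constraints):
    """Check consistency of a Simple Temporal Problem over n events.
    constraints: list of (i, j, lo, hi) meaning lo <= t_j - t_i <= hi.
    Build the distance graph and run Floyd-Warshall; the STP is
    consistent iff the graph has no negative cycle (no negative
    diagonal entry in the shortest-path matrix)."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, lo, hi in constraints:
        d[i][j] = min(d[i][j], hi)    # encodes t_j - t_i <= hi
        d[j][i] = min(d[j][i], -lo)   # encodes t_i - t_j <= -lo
    for k in range(n):
        for a in range(n):
            for b in range(n):
                if d[a][k] + d[k][b] < d[a][b]:
                    d[a][b] = d[a][k] + d[k][b]
    return all(d[i][i] >= 0 for i in range(n))
```

The paper's contribution concerns the harder setting where some durations are uncontrollable and only partially observable, for which it generalizes the known reduction rules.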

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Bin

Monday, Dec. 7, 2009

Paper: "Constraint Satisfaction Algorithms for Graphical Games."
AUTHORS
BIB INFO.

Reason for choice:

Abstract:

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Jun

Monday, Dec. 7, 2009

Paper: "Discovering Chinese chess strategies through coevolutionary approaches."
C. S. Ong, H. Y. Quek, K. C. Tan and A. Tay.
In Proceedings of IEEE Symposium on Computational Intelligence and Games, pp. 360–367 (2007).

Reason for choice:

Abstract: Coevolutionary techniques have been proven effective in evolving solutions to many game-related problems, with successful applications in complex chess-like games such as Othello, Checkers, and Western Chess. This paper explores the application of coevolutionary models to learn Chinese Chess strategies. The proposed Chinese Chess engine uses an alpha-beta search algorithm, quiescence search, and move ordering. Three different models are studied: single-population competitive, host-parasite competitive, and cooperative coevolutionary models. A modified alpha-beta algorithm is also developed for performance evaluation, and an archiving mechanism is implemented to handle intransitive behaviour. Interesting traits are revealed when the coevolution models are simulated under different settings, with and without an opening book. Results show that the coevolved players can perform relatively well, with the cooperative model being best at finding good players under random strategy initialization and the host-parasite model being best when strategies are initialized with a good set of starting seeds.
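
The alpha-beta search the engine relies on can be sketched in negamax form. This is the generic textbook algorithm under assumed helper callbacks (moves, apply_move, evaluate), not the authors' engine:

```python
def alphabeta(state, depth, alpha, beta, moves, apply_move, evaluate):
    """Negamax alpha-beta search. evaluate() scores a position from the
    perspective of the player to move; each recursion negates the child
    value and the (alpha, beta) window. Prunes once alpha >= beta."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)
    best = float("-inf")
    for m in ms:  # move ordering would sort ms here to prune earlier
        score = -alphabeta(apply_move(state, m), depth - 1,
                           -beta, -alpha, moves, apply_move, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # beta cutoff: opponent will avoid this line
            break
    return best
```

Good move ordering is what makes the pruning effective, which is why the abstract lists it alongside quiescence search.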

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Ryan

Wednesday, Dec. 9, 2009

Paper: "Intelligent Trading Agents for Massively Multi-player Game Economies."
J. Reeder, G. Sukthankar, M. Georgiopoulos, and G. Anagnostopoulos.
In Proceedings of the Fourth Artificial Intelligence and Interactive Digital Entertainment Conference, 2008, pp. 102-107.

Reason for choice:

Abstract: As massively multi-player gaming environments become more detailed, developing agents to populate these virtual worlds as capable non-player characters poses an increasingly complex problem. Human players in many games must achieve their objectives through financial skills such as trading and supply chain management as well as through combat and diplomacy. In this paper, we examine the problem of creating intelligent trading agents for virtual markets. Using historical data from EVE Online, a science-fiction based MMORPG, we evaluate several strategies for buying, selling, and supply chain management. We demonstrate that using reinforcement learning to determine policies based on the market microstructure gives trading agents a competitive advantage in amassing wealth. Imbuing agents with the ability to adapt their trading policies can make them more resistant to exploitation by other traders and capable of participating in virtual economies on an equal footing with humans.
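
The reinforcement-learning idea can be illustrated with a toy tabular Q-learner on a synthetic price series. The state and action design below is invented for illustration and is far simpler than the paper's market-microstructure setup:

```python
import random

def q_learning_trader(prices, alpha=0.2, gamma=0.9, eps=0.5, passes=200, seed=0):
    """Tabular Q-learning over a price series. State: whether the last
    price move was down (0) or up (1). Action: 'buy' (reward = next
    price change) or 'hold' (reward = 0). Returns the learned Q-table."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in ("buy", "hold")}
    for _ in range(passes):
        for t in range(1, len(prices) - 1):
            s = 0 if prices[t] < prices[t - 1] else 1
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.choice(("buy", "hold"))
            else:
                a = max(("buy", "hold"), key=lambda x: Q[(s, x)])
            r = prices[t + 1] - prices[t] if a == "buy" else 0.0
            s2 = 0 if prices[t + 1] < prices[t] else 1
            target = r + gamma * max(Q[(s2, "buy")], Q[(s2, "hold")])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q
```

On a mean-reverting series, the learner discovers that buying after a drop pays off, which is the flavor of policy adaptation the paper studies at much larger scale.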

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Neelu

Wednesday, Dec. 9, 2009

Paper: "Analyzing the Performance of Pattern Database Heuristics."
Richard E. Korf.
In Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence (AAAI-07), Vancouver, British Columbia, July 22-26, 2007, pp. 1164-1170.

Reason for choice: Of the papers I looked at, I found the Korf paper on pattern databases the most interesting, and I can build on what we have studied in class about search and on the paper we read for class.

Abstract: We introduce a model for predicting the performance of IDA* using pattern database heuristics, as a function of the branching factor of the problem, the solution depth, and the size of the pattern databases. While it is known that the larger the pattern database, the more efficient the search, we provide a quantitative analysis of this relationship. In particular, we show that for a single goal state, the number of nodes expanded by IDA* is a fraction of $(\log_b s + 1)/s$ of the nodes expanded by a brute-force search, where $b$ is the branching factor, and $s$ is the size of the pattern database. We also show that by taking the maximum of at least two pattern databases, the number of node expansions decreases linearly with $s$ compared to a brute-force search. We compare our theoretical predictions with empirical performance data on Rubik's Cube. Our model is conservative, and overestimates the actual number of node expansions.
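
The paper's predicted fraction is easy to evaluate numerically. A small sketch of that single-database formula (the parameter values below are illustrative, not from the paper's experiments):

```python
import math

def pdb_fraction(b, s):
    """Predicted fraction of brute-force node expansions when IDA* uses
    a single pattern database of size s, given branching factor b, per
    the paper's (log_b s + 1) / s result for a single goal state."""
    return (math.log(s, b) + 1) / s

# Example: branching factor ~13 with a one-million-entry database; the
# heuristic is predicted to prune away nearly the entire search.
frac = pdb_fraction(13.0, 10**6)
```

The formula makes the intuition quantitative: the savings grow almost linearly with the database size s, which is why larger pattern databases pay off so dramatically.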

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Paul

Friday, Dec. 11, 2009

Paper: "Motivated reinforcement learning for non-player characters in persistent computer game worlds."
Kathryn Merrick, Mary Maher.
In Proceedings of the 2006 ACM SIGCHI international conference on Advances in computer entertainment technology (ACE-06).

Reason for choice: AI in shooters need not be very complex: respond quickly, shoot accurately, and maybe consider finding some cover if you happen to be taking fire. Even in something as comparatively complex as a single-player RPG, it's still possible to do a fairly complete job of preparing an AI for anything it may encounter in the game. As we move toward more complex game environments, where both users and developers contribute to the evolution of the world as a whole, it becomes ever more important for computer-controlled characters to be able to respond to unforeseen changes to the world around them. What's the point in playing a game where the characters you meet don't seem to inhabit the world you're supposed to be sharing with them?

Abstract: Massively multiplayer online computer games are played in complex, persistent virtual worlds. Over time, the landscape of these worlds evolves and changes as players create and personalise their own virtual property. In contrast, many non-player characters that populate virtual game worlds possess a fixed set of pre-programmed behaviours and lack the ability to adapt and evolve in time with their surroundings. This paper presents motivated reinforcement learning agents as a means of creating non-player characters that can both evolve and adapt. Motivated reinforcement learning agents explore their environment and learn new behaviours in response to interesting experiences, allowing them to display progressively evolving behavioural patterns. In dynamic worlds, environmental changes provide an additional source of interesting experiences triggering further learning and allowing the agents to adapt their existing behavioural patterns in time with their surroundings.
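
The "interesting experiences" signal can be caricatured as a count-based novelty reward that decays as a state is revisited. This is a minimal illustrative stand-in, not the authors' motivation model:

```python
from collections import defaultdict

def novelty_reward(counts, state):
    """Return an intrinsic reward that shrinks each time a state is
    revisited: novel experiences are interesting, familiar ones become
    boring. counts maps state -> number of visits so far."""
    counts[state] += 1
    return 1.0 / counts[state]
```

Feeding a signal like this to an ordinary reinforcement learner pushes the agent toward unexplored parts of the world, so environmental changes (new player-built structures, say) automatically renew its motivation to learn.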

URL: Download the paper at this location

PRESENTATION: ( .pdf )

Eric

Friday, Dec. 11, 2009

Paper: "Automatically generating game tactics through evolutionary learning."
Marc Ponsen, Hector Munoz-Avila, Pieter Spronck, David W. Aha.
In AI Magazine Volume 27 Number 3 (2006).

Reason for choice:

Abstract:

URL: Download the paper at this location

PRESENTATION: ( .pdf )