|Fausto Giunchiglia. Università di Trento, Italy|
|Managing Diversity in Knowledge|
|We are facing an unforeseen growth in the complexity of data, content and knowledge. By complexity we mean the size, the sheer numbers, the spatial and temporal pervasiveness of knowledge, and the unpredictable dynamics of knowledge change, unknown not only at design time but also at run time. The obvious example is the Web and all the material, including multimedia, that is continuously made available online. Our goal in this talk is to propose a novel approach that deals with this level of complexity and that, hopefully, will overcome some of the scalability issues shown by existing data and knowledge representation technology. The key idea is a bottom-up approach in which diversity is treated as a feature to be maintained and exploited, not as a defect to be absorbed into some general schema. The proposed solution amounts to a paradigm shift from the view in which knowledge is mainly assembled by combining basic building blocks to one in which new knowledge is obtained by the design-time or run-time adaptation of existing knowledge. Typically, we will build knowledge on top of a landscape of existing, highly interconnected knowledge parts. Knowledge will no longer be produced ab initio, but more and more as adaptations of other, existing knowledge parts, often performed at run time as the result of a process of evolution. This process will not always be controlled or planned externally, but induced by changes perceived in the environment in which systems are embedded. The challenge is to develop design methods and tools that enable effective design by harnessing, controlling and using the effects of emergent knowledge properties. This leads to the proposal of developing adaptive and, when necessary, self-adaptive knowledge systems, and of a new methodology for knowledge engineering and management that we call "Managing Diversity in Knowledge by Adaptation".
We will present and discuss the ideas above considering, as an example, the use of ontologies in the formalization of knowledge (for instance in the Semantic Web).|
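As a toy illustration of this bottom-up view (the names and schemas below are hypothetical, not from the talk), the sketch keeps two independently designed knowledge parts as they are and adapts between them through an explicit mapping at query time, rather than absorbing both into one global schema:

```python
# Two independently designed "knowledge parts" describing lodging.
# Diversity is preserved: neither schema is rewritten to match the other.
hotel_ontology = {"Lodging": ["Hotel", "Hostel"]}
travel_ontology = {"Accommodation": ["Hotel", "B&B"]}

# An explicit mapping adapts one part to the vocabulary of the other.
mapping = {"Accommodation": "Lodging"}

def adapt(query_concept, source, target, mapping):
    """Answer a query posed against `source` by also translating it,
    via the mapping, into the vocabulary of `target`."""
    translated = mapping.get(query_concept, query_concept)
    return sorted(set(source.get(query_concept, [])) |
                  set(target.get(translated, [])))

# Querying "Accommodation" draws on both parts without merging them.
print(adapt("Accommodation", travel_ontology, hotel_ontology, mapping))
```

The point of the sketch is the design choice: the mapping is a first-class, revisable artifact, so each knowledge part can continue to evolve independently.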
|Cynthia Breazeal. MIT Media Lab. MA. USA|
|Socially Intelligent Robots|
|No longer restricted to the factory floor or hazardous environments, autonomous robots are making their way into human environments. Although current commercial examples of domestic robots are more akin to toys, smart appliances, or supervised tools, the need for robots that can help ordinary people as capable partners and interact with them in a socially appropriate manner poses new challenges and opens new opportunities for robot applications in the home, office, school, entertainment locales, healthcare institutions, and more. Many of these applications require robots to play a long-term role in people’s daily lives. Developing robots with social and emotional intelligence is a critical step towards enabling them to be intelligent and capable in their interactions with humans, intuitive for people to communicate with, able to work cooperatively with people, and able to learn quickly and effectively from natural human instruction. This talk presents recent progress in these areas and outlines several “grand challenge” problems of social robotics. Specific research projects and applications are highlighted to illustrate how robots with social capabilities are being developed to learn, assist, entertain, or otherwise benefit their human counterparts.|
|Wolfgang Wahlster. Universität des Saarlandes - DFKI, Saarbrücken, Germany|
|SmartWeb: Getting Answers on the Go|
|The appeal of being able to ask a spoken question to a mobile internet terminal and receive an audible answer immediately has been renewed by the broad availability of always-on Web access, which allows users to carry the internet in their pockets. Ideally, a multimodal dialogue system that uses the Web as its knowledge base would be able to answer a broad range of spoken questions. Practically, the size and dynamic nature of the Web, and the fact that the content of most web pages is encoded in natural language, make this an extremely difficult task. Recent progress in Semantic Web technology is enabling innovative open-domain question answering engines that use the entire Web as their knowledge base and offer much higher retrieval precision than current web search engines. Our SmartWeb system is based on the fortunate confluence of three major research efforts that have the potential of forming the basis for the next generation of Web-based answer engines. The first effort is the Semantic Web, which provides the formalisms, tools and ontologies for the explicit markup of the content of Web pages; the second effort is the development of Semantic Web services, which results in a Web where programs act as autonomous agents that become the producers and consumers of information and enable the automation of transactions. The third important effort is information extraction from the huge volumes of rich text corpora available on the Web, exploiting language technology and machine learning.
SmartWeb provides a context-aware user interface, so that it can support the mobile user in different roles, e.g. as a car driver, a motorcyclist, a pedestrian or a sports spectator. One of the demonstrators of SmartWeb is a personal guide for the 2006 FIFA World Cup in Germany, which provides mobile infotainment services to soccer fans, anywhere and anytime.
This talk presents the anatomy of SmartWeb and explains the distinguishing features of its multimodal interface and its answer engine. In addition, the talk gives an outlook on the French-German mega project Quaero and its relation to SmartWeb.
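To make the first of the three efforts concrete, the sketch below (illustrative facts and names only, not the actual SmartWeb architecture) contrasts the kind of precise lookup that explicitly marked-up content enables with keyword search over raw text: once content is represented as subject–predicate–object triples, a question maps to a pattern match rather than a string match.

```python
# Toy RDF-style triples standing in for semantically annotated Web content.
# (Illustrative facts only; the real SmartWeb engine answers over the open Web.)
triples = [
    ("Germany", "hosts", "FIFA_World_Cup_2006"),
    ("FIFA_World_Cup_2006", "startsOn", "2006-06-09"),
    ("Italy", "isA", "Country"),
]

def answer(subject, predicate, store):
    """Retrieve all objects matching (subject, predicate, ?) -- the precise
    lookup that explicit markup enables, unlike keyword search over text."""
    return [o for s, p, o in store if s == subject and p == predicate]

print(answer("FIFA_World_Cup_2006", "startsOn", triples))
```

A spoken question such as "When does the World Cup start?" would, after language understanding, bottom out in exactly this kind of pattern query against the ontology-annotated content.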
|Hector Levesque, University of Toronto, Canada|
|The Truth about Defaults|
|Virtually all of the work on defaults in AI has concentrated on default reasoning: given a theory T containing facts and defaults of some sort, we study how an ideal agent should reason with T, typically in terms of constructs like fixpoints, or partial orders, or nonmonotonic entailment relationships. In this talk, we investigate a different question: under what conditions should we consider the theory T to be true, or believed to be true, or all that is believed to be true? By building on truth in this way, we end up with a logic of defaults that is classical, that is, a logic with an ordinary monotonic notion of entailment. And yet default reasoning emerges naturally from these ideas. We will show how to characterize the default logic of Reiter and the autoepistemic logic of Moore in purely truth-theoretic terms. We will see that the variant proposed by Konolige is in fact a link between the two, and that all three fit comfortably within a single logical language, which we call O3L. Finally, we will present first steps towards a proof theory (with axioms and rules of inference) for O3L. Among other things, this allows us to present ordinary sentence-by-sentence derivations that correspond to different sorts of default reasoning. This is joint work with Gerhard Lakemeyer.|
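As a reminder of the two formalisms the talk builds on, here are their standard textbook forms (this is the usual notation, not the O3L notation of the talk): Reiter's classic "birds fly" default, and Moore's autoepistemic rendering of the same default, where L is read as "is believed".

```latex
% Reiter's default rule: if Bird(x) holds and Flies(x) is consistent
% with what is known, conclude Flies(x).
\frac{\mathit{Bird}(x) : \mathit{Flies}(x)}{\mathit{Flies}(x)}

% Moore's autoepistemic counterpart: a bird that is not believed
% to be flightless flies.
\mathit{Bird}(x) \land \neg L\,\neg\mathit{Flies}(x) \rightarrow \mathit{Flies}(x)
```

The talk's question can then be restated against these forms: rather than asking what extensions or stable expansions such rules generate, it asks when a theory containing them should itself be regarded as true, believed, or all that is believed.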