TRANSCRIPT
Research Issues at the Boundary of AI and Robotics
Dylan Shell, Nancy Amato, and Sven Koenig
with contributions by
Ming C. Lin, Robin Murphy, Wheeler Ruml,
Reid Simmons, Peter Stone, and Carme Torras
21 June 2015
1 Introduction
The spinning of Fortune’s wheel proffered a rare opportunity earlier this year. Just days
before the annual AAAI conference in Austin, Texas, the senior program committee for
IEEE ICRA’15 met in College Station, Texas — two practically neighboring locations by
Texan standards! In a cooperative initiative, the AAAI and IEEE Robotics & Automation
Society snatched the chance to bring researchers working in Robotics, Artificial Intelligence,
and related areas together to spur cross-pollination. As part of this, the National Science
Foundation sponsored a full-day workshop focusing on the challenges at the boundary of
Artificial Intelligence and Robotics. Lately, these research areas have attracted considerable
attention in the popular press. This public exposure has moved beyond merely generating
interest with captivating forecasts and prognostication; it is beginning to reflect a
society undergoing a broader awakening to a future afforded by these technologies. It is an
awakening that is part sanguine and part apprehensive. And so there can be little dispute
as to the timeliness of the effort.
The goal of the workshop was to produce a list of recommendations for professional orga-
nizations, funding agencies, and individual researchers to speed progress toward intelligent
robots. The workshop was attended by AI and robotics experts from the US and abroad, in-
cluding senior researchers, academics, and funding agency program officers (see Appendix A).
It included twenty-four (necessarily short) talks by invited speakers, a panel discussion, three
concurrent breakout sections, and an open forum in which anyone could offer their views.
Prior to the event, the organizers collected position statements on the research issues at
the boundary of AI and robotics, and these statements were posted on the web as an initial
germ to provoke further thought [3].
This report distills and organizes the recommendations made by the workshop attendees,
summarizing ideas, observations, and common themes arising from the discussion. We recognize
that the guidance provided herein may fall short of a complete roadmap, but we believe
the direction in which it points is true.
The document is organized as follows. The next section identifies and presents the seven
themes condensed from the workshop in toto. This is followed by Section 3, which contains
three reports on the breakout discussions held as part of the afternoon session. The topics are
(3.1) challenge problems, (3.2) recommendations to funding agencies, and (3.3) activities
for a potential task force. These reports are contributions authored by the researchers
who chaired each breakout discussion. The final portion of that section (3.4) adds some
brief supplementary commentary. The penultimate Section 4 provides some higher level
observations of a more philosophical nature, and the final Section 5 concludes.
2 Common themes
We have identified a total of seven themes that emerged from the workshop and, in what
follows, a subsection is devoted to discussing each. There is some overlap between themes
and, consequently, some thought was needed before we arrived at the decomposition that
appears below. The organizational rationale is summarized as follows:
The first two themes (presented in Sections 2.1 and 2.2) are concerned with the short-
comings of the problem domains and tasks that are the province of much current work.
Both themes emphasize the need to stimulate research beyond safe insular islands of
customary practice if we are to push the field beyond local, parochial outlooks; in a
way, both themes demand an ambitious and courageous readjustment of the status
quo.
The second pair of themes (in Sections 2.3 and 2.4) addresses the nature of the interplay
between the sensed (outer) and representational (inner) worlds of the intelligent robot.
Questions relating to these aspects are long standing and deep. But, we observe that
current research aimed at addressing these aspects head-on appears to constitute neither
mainstream AI nor mainstream robotics any longer. We urge a reorientation to rectify
this state of affairs.
The next two themes (in Sections 2.5 and 2.6) consider the relationship of academic
researchers with other technical stakeholders with a particular focus on research in
industry. Both themes describe perceptions, responsibilities, and what the ensuing
changes imply for the future.
The final theme, in Section 2.7, is longer than the other six. It analyzes the interplay
of, and flow of ideas between, researchers at the boundary of AI and robotics by
examining how experience and scientific knowledge can be condensed into models. It
appears that this standard process, which usually occurs naturally without needing
deliberation or explicit governance, is not operating as effectively as it might. We
therefore propose that greater attention be paid to such models and especially the
role they can play in helping coordinate research efforts. The theme provides some
pragmatic recommendations in this regard for both communities.
We have found it most convenient to have the recommendations presented in each theme
as they arise. We anticipate that this section can be used in a “random-access” manner: in
the digital version, clicking on the section numbers will transport you to the associated part
of the document.
2.1 Open-ended domains
Many of the successes in AI and robotics have come from exploring narrow domains and, in
some cases, mastery of those domains has been demonstrated. But realizing the longer-term
goal of intelligent robots will require substantial work in more open-ended settings. The
focus on carefully prescribed, specialized domains and well-delineated problems has allowed
valuable lessons to be learned, but effort and resources invested beyond a certain point
result in an “overfit” to these particular classes of problems. Thereafter, failure to generalize
manifests itself in a myriad of ways, including extreme sensitivity of methods to specific
problems, integration efforts that flounder, and brittleness.
Brittleness and general deficiencies in robustness are difficult lessons to learn — difficult
in that they are both vexing and disagreeable. Also, it is far from simple to quantify aspects of
these failures in a meaningful way that uncovers principles or generalizable lessons. A further
challenge is in publishing these aspects. It is far easier to provide a once-off demonstration
as an existence proof than to convince the overworked reviewer that an investigation of a
failed system has merit.
At the workshop, RoboCup Soccer and Rescue were given as examples with a mixed record
with regard to domain openness. Both certainly involve a great deal of “messy stuff” such as
unreliable communications and limited sensing. In both domains, it is difficult for a robot
to anticipate what will happen beyond the immediate time horizon. Most agree that strong
research on multi-robot coordination has resulted from the RoboCup effort. We note that
one aspect of RoboCup that is especially worthy of imitation is the periodic adjustment and
scaling of the problem domain difficulty, helping focus attention in the right places.
A focus on more open-ended problem domains has the potential to encourage a shared
effort in a few ways:
• It demands a shift toward scenarios where performance would need to be examined
under expected circumstances, not worst-case contingencies. These settings imply acceptance
of some failures under dire conditions and likely preclude current notions of
optimality altogether. Such domains require a different notion of success in order for
a problem to be declared properly understood and solved. This redefinition is part of
a process which Kuhn [10, p. 90 ff.] has argued is necessarily part of the evolution of
scientific disciplines and becomes reflected in resultant specializations.
• Open-ended domains require that robots have a deeper understanding of the world
and their role in it, and motivate questions regarding how robots can reason about
their own capabilities. This likely includes planners that reason about the strengths
and weaknesses of the robot, producing plans that are accurate in the immediate (both
spatially and temporally) surroundings of the robot but systematically more coarse-
grained farther out. These approaches will likely require new non-standard planning
metrics.
• Since it is impossible to have all the domain information available upfront, this vul-
nerability may have to be embraced and seen as an opportunity. One idea is to have
AI techniques that exploit how robots can model and engage other agents actively (e.g.,
asking for help). Some anticipate that this will require reexamination of fundamental
assumptions about autonomy and practical robots.
2.2 AI for social robots and societally relevant robots
The last point in the preceding discussion begins to illustrate how some researchers are
thinking about robots as participants within a wider social fabric. This poses challenges
involving (i) a reframing of the goal of autonomy in intelligent robots; (ii) a more detailed
examination, from a computational perspective, of the rich social substrate in which such
devices will operate; (iii) new and substantial challenges in modeling, simulation, reasoning,
and planning in these contexts; and (iv) a demand for leadership and circumspection
regarding the place for such technologies if they are to be embedded in our society. Some of
these aspects are tackled more directly by the robotics community (specifically human-robot
interaction researchers) and other aspects are examined more visibly by the AI community.
These elements are clearly at the boundary between the communities and deserve to be taken
up with equal engagement by all. Moreover, they are topics where a greater flow of ideas
between the two communities should be stimulated.
The goal of intelligent robots that can represent, reason about, and act amongst a web
of social actors raises several open research questions lying at the intersection of multiple
topics in AI and robotics. Some questions of immediate interest, including to funders, are:
• Within the topic of robot-human communication, how does one produce natural user
interfaces? This includes research in gesture recognition and generation, synchroniza-
tion in interactions, and use of physiological sensors to measure human state variables,
as well as broader sensing and assessment for adaptation of behavior in terms of those
estimates.
• How can theories, logics, and AI studies of multi-agent systems be used to understand
general large-scale robot-human teaming? Are there ways to reason about trust in such
interactions, and can practicable techniques to measure, elicit, maintain, and repair
trust be deployed on robots? There are also opportunities to explore deceptive agents.
• How can robots reason about intent? Are there effective ways to recognize intent in
humans, and ways to communicate it clearly?
If robots of the future will partake in a rich social context, one must recognize the degree
to which technologies have already brought about changes to this context. One must also
examine how they will continue to bring about change. The IEEE Computer Society’s 2022
Report [4] provides several predictions of the near future, setting forth a vision its authors
term “seamless intelligence.” The vision includes increased automation, ubiquitous access
to information, and pervasive human networking. Although the report limits its treatment
of robots to a few specific and quite narrow application domains, it nevertheless provides
predictions of the technological milieu robots will inhabit. The report foresees sensing,
networking, and computational technologies that could render services to robots and which,
among other things, could provide information to improve the capabilities of robots as social
actors. It is difficult to imagine the vision of seamless intelligence without socially adept
robots playing a major role.
With development of transformative technologies comes the duty of responsible steward-
ship. AI and robotics researchers should recognize that development of technology is a
human enterprise that involves more than technical choices: it is an activity that is entwined
with value judgements [6]. We can choose the community we serve and researchers would
do well to weigh these considerations deliberately, surely opting to strive for values of in-
clusiveness and egalitarianism. We might, as proposed by the most senior of the attendees,
think about semi-automated coaching robots for the elderly. Another example, proposed as
a challenge problem, is a robot that could telephone 911 in an emergency and be helpful to
parties at both ends for the duration of the call. (This problem has many aspects identified
as desirable for challenge problems, as discussed below.) As researchers, if we so choose, the
grand challenges we aim to achieve could be as bold and as far-reaching (not to mention, as
measurable) as the Millennium Development Goals [20].
2.3 Where do the symbols come from?
Machine perception and, more specifically, computer vision are immensely challenging as-
pects of intelligence. At least amongst researchers operating at the boundary of AI and
robotics, no statement is likely to garner more widespread agreement than the preceding
one. The perception problem is foundational to most research that aims to produce in-
telligent robots, yet much of the problem remains poorly understood. It remains unclear
whether or not we have found the right abstractions to deal with the complexity of the
“buzzing noisy world.”1 Yet, the concern has had as long a history as any in AI — in 1947
Pitts and McCulloch [17] considered the question of what it meant for a computational
system to know universals and, operationally, how it might map unstructured signals into
conceptual categories or forms. The question is perhaps unique in the way it cuts across
fundamental issues of intelligence: it runs from eminently practical concerns (what to do
with the sensors we have), through interests in representation (how to denote and signify
information), ultimately to epistemological and theoretical issues.
Three perspectives on the signal-to-symbol problem were represented at the workshop:
• The first, although not explicitly described, appears to reflect the position mainly of
roboticists within the tradition of control. Here, the question is largely evaded by con-
tinuous time controllers operating purely at the signal level itself. Discrete structures
are certainly employed extensively within the hybrid-systems control community but,
so far as we are aware, those structures are provided as models a priori and it is not
generally the case that they are determined at runtime from sensory input.
• The second perspective argued that learning —necessarily employed in a staggered, or
hierarchical way— should be used to turn signals into symbols. Here the need for a
diversity of representations was stressed to capture metric and topological information,
and also (distinctly) objects, actions, and agents. It was observed that although some
techniques show promise at providing a middle-level component, notably deep learning
methods, the current successes involve operations as opaque black-boxes rather than
purveyors of understanding.
• The third perspective was communicated by researchers who are actively bridging the
computer vision and robotics communities. The challenge of how to reliably infer sym-
bols from sensor streams was reframed as the scene understanding problem, recognizing
that different variants are needed for different tasks such as robot navigation, manip-
ulation, home service, and so on. Demonstrations of au courant semantic labelling
algorithms, applied to images representative of the sort a robot might capture, proved
to be impressive. But the cleft between this valuable work and the standard stream
of percepts model (cf. Russell and Norvig [19]) remains wide. (Additional specific
observations regarding opportunities for this research at the boundary of AI and robotics
appear in the next theme.)
The impression gained from these discussions is that galvanizing the respective communities
around perceptual problems is not only valuable but an absolute necessity. Challenge
problems like the Semantic Robot Vision Challenge (SRVC) [8] certainly deserve a higher
profile in the respective communities.

1This colorful phrase was used by Ben Kuipers in his talk at the workshop; its visceral quality is apt.
2.4 Perception with a purpose
The preceding theme is not intended to give the impression that research on perception has
been at an impasse; indeed, it is worth observing that significant conceptual ground has been
covered: none of the workshop participants advocated the notion of “pure vision” [5], formerly a
prevailing paradigm. In contrast, the suggestions made during the workshop fall broadly
under the rubric of situated perception, where emphasis is placed both on the
specific needs of the robot and on the particular advantages afforded by action.
• Substantial progress in computer vision in recent years has come from using image
repositories available on the internet as a springboard. But it must be emphasized that
these are not representative of the view from a robot. Images captured from on-board
a robot often have a great deal of lighting and viewpoint variability. The environments
in which robots are expected to operate may have significant clutter. Also, occlusion
is frequent and can be grave, as it is common for a robot’s own body to occlude any
objects being manipulated. An immediate consequence is that performance statistics
reported in a paper may fail to reflect the algorithm’s behavior once employed on a
robot.
• For robots, sensing must provide understanding that is useful for selection or generation
of action. Computer vision that is purposeful in this way can exploit task structure and
context in order to sharpen its operation — for example, algorithms may choose to trade
off generality to gain performance and robustness in the situations that matter most.
Methods with awareness of their operating conditions, potentially possessing the ability
to select from a suite of models adapted to different environments and circumstances,
should be able to use context to focus attention and invest computational effort wisely.
But clearly important questions remain as to how the contextual information and task
structure should be represented and used. (It should be recognized that part of what we are
referring to as context need not be static: knowledge of the regularity in human
behavior is important for perceiving and understanding human activity.2)
• Vision for robots is coupled to action in the other direction too. Action can resolve
sensing ambiguities by repositioning sensors. Active vision, perceptual planning, and
more broadly the idea of task-oriented situational awareness remain areas clearly at
the intersection of AI and robotics with much promise. Also, because robots can be
systematic about motions they make, work on calibration and identification of sensor
system parameters is possible, which could help make robots more robust (e.g., as
lenses deteriorate). This involves weakening common assumptions regarding what the
agent (or agent’s designer) knows about the form of arriving data.

2Katia Sycara gave a compelling example of how robots and humans see completely different things: one
sees a man with a walker, the other an obstacle with 〈distance d, angle θ〉.
Although the preceding points summarize discussion of computer vision, most are appli-
cable to perception more widely and not only computer vision per se. Many robots have
cameras that produce video streams for processing (as distinct from single frames) but there
are also many opportunities for working with 3D sensors and active sensors too.
2.5 Have vehicle—will add AI later
One observation, made separately in different contexts by multiple discussants, is the fallacy
among some practitioners that unmanned vehicles and robots can be supplemented with
AI capabilities late in the design process. Murphy [13] presents a study (material discussed
at the workshop and subsequently appearing in print) of engineering team composition in
the DARPA Robotics Challenge, which shows an underrepresentation of AI specialists. Any
approach that glues together functionality developed in stove-pipes is unlikely to produce
robust intelligent behavior, especially when the appropriate choice of interfaces between the
pieces remains an open problem. These views echo Section 2.1 because the call for more
open-ended problems emphasizes the importance of general AI from the outset. But it
constitutes a broader observation about perceptions of AI by engineers and how, as methods
become established, sub-communities tend to separate into narrower niche areas. This is
a process for which there is certainly historical precedent in AI, and perspectives may vary
as to whether it is problematic or simply natural.
In addition to open-ended domains, a better or, at least different, distillation of the body
of knowledge into a canon of principles may be needed. Some aspects of this are mentioned
in Section 2.7, but activities relating to clearer communication of the role of AI in practical,
fielded systems are beyond the scope of the discussion. The 2012 report of the Defense Science
Board Task Force [2] is a recent and, unfortunately, all too rare exemplar in this regard.
2.6 Involvement of the growing industry in intelligent robots
The last few years have seen big changes in the perceived market readiness of many of the
core technologies that underpin intelligent robots and related devices. Commercial interests
have grown into an accelerating industrial race with technology companies as well as vehicle
(e.g., automobiles) and equipment manufacturers (e.g., construction, logistics, and warehousing
equipment) deciding that profits are within reach. This shifting landscape has important
ramifications for research programs as they will need to adapt in order to realize symbiotic
relationships between industry, academia, and government research laboratories. In this
theme, we make two observations. The first is centered on how research agendas will need to
account for the new state of affairs, and the second is concerned with the shift in educational
role universities will now have to play.
A cooperative exchange that recognizes the different strengths and weaknesses of univer-
sities and companies should be able to produce sustainable relationships that are mutually
beneficial. But, acknowledging that the resources at the disposal of a professor are not of
the same scale as those an industrial effort may bring to bear, the academic researcher must
select research problems wisely. For example, it might be foolhardy to establish a research
program to build driverless automobiles at this point. While opportunities certainly can be
found in this area, they require shrewd choice of research question by the investigator. This
state of affairs is largely a success story for the field.
One pattern of interaction between academia and industry is based on the idea that the
latter extend and harden methods shown by the former to have promise, typically evidenced
in publications or through wide adoption. Although a simplification, it is mostly true that in
industry an established method would usually be favored over another with a more limited
record of success. But academic values and the publication process tend to apply pressure
away from established ideas. As an example, in 2015 it is extraordinarily challenging to
publish papers on advances to visual SLAM problems because part of the community con-
siders this a solved problem. (It is even harder to break into this research area given the
strength of the efforts of well-established groups.) Yet, specialists in this area are currently
among the most highly sought-after candidates in industry. On one hand is the academy,
producing researchers that prize novelty and innovation, and on the other is a view that sees
them as a source of skilled talent. Crucially, developing talent in particular specialist subareas
correlates with the ability to conduct fundable, publishable academic research on topics in
that subarea. And, although generally compatible, the tension between these views becomes
more significant when industry is burgeoning.
Most academic researchers focus on extending what is possible, and new results accrue
in concise publications. But leaders in the field must play a crucial role in synthesizing
material to solidify the body of knowledge into a whole. The activity of writing a textbook,
for example, is far more than summarizing a wake of furious activity, smoothing over the
fits-and-starts described in short, rushed, piecemeal publications of questionable pedagogical
value. Rather, scholarly synthesis is a responsibility and it has vast long term importance. If
carried out with an eye to the future needs of industry, it has the potential to transform how
and when technologies impact the population at large. An instructive and relevant example
is the pioneering text of Mead and Conway [12]. That book precipitated a fundamental
restructuring of academic programs, and ultimately provided the integrated circuit industry
with the manpower which has driven Moore’s law for the last thirty-five years. Certainly
comparisons can be drawn that identify similarities between the nascent intelligent robot
industry and that of chip manufacture some decades ago.
Recent years have seen the development of a number of PhD and MS programs specializing in
robotics as universities position themselves to attract students captivated by robots and to
send their graduates on to companies in need of technical expertise. The curricular choices for
those programs will have an important effect on the boundaries of AI and robotics, including
repercussions for how the two areas are seen in industry (cf. Section 2.5), and longer-term
implications for the products that are delivered.
2.7 The role of models and the model-building process
2.7.1 A coarse caricature
Both robotics and AI researchers make simplifications in order to produce problems that may
be attacked feasibly, but the two communities have different histories and idiosyncrasies in
how they approach this sort of necessary simplification. Generally, the roboticist puts greater
emphasis on the artifact or particular system, and must show considerable enterprise in con-
solidating otherwise distinct components, including the connection of representations, plans,
and models to sensors and actuators. The roboticist is more likely than an AI researcher to
be gratified by an end-to-end system that exhibits some specific skill, even when that skill is
fairly narrow. Frequently, the roboticist will investigate constraints imposed on the problem
by current hardware and the world, whether these have to do with information, actuation,
dynamics, or integration of the parts. Even so it is not uncommon for some select pieces to be
addressed and others to be simplified (e.g., fiducial markers to facilitate sensing). An AI re-
searcher is more likely to be comfortable simplifying most of these complications and treating
them as implementation details. This occurs through modeling assumptions inherent in the
formulation of the problem (e.g., sensor uncertainty is stochastic) — although the veracity
of the model may be given comparatively little scrutiny. Many times the AI researcher aims
for higher-level algorithms, software, or architectures that, if not entirely general purpose,
are not dependent on the particular task at hand. Respectively, and respectfully, each of
the positions may be criticized from the opposite perspective as being exceedingly ad hoc or
lacking realism. Like any good compromise that leaves all parties slightly dissatisfied, this
tension is probably healthy; there certainly are enough parts of the problem for everyone.
2.7.2 Building and using models
The preceding caricature sets the stage for discussion of the word “model” because it seems
to be used differently by different researchers. Although perhaps unsurprising,3 it resulted in
some of the speakers at the workshop talking past one another. We devote some attention to
various uses of the word here because one of the suggestions we make below is that models
are important units of exchange between roboticists and researchers in AI.
3As Raphael [18] writes, “The term ‘model’ has been grossly overworked, and it does not seem to have
any generally agreed-upon definition [9].”
The following are three uses of the word “model” as witnessed at the workshop. They are
not mutually-exclusive definitions nor did all AI researchers use the term strictly in one form
and roboticists another; there was a considerable mix but any single Procrustean definition
conflates otherwise useful nuances.
(A) Models as predictors and classifiers of phenomena One consequence of ubiq-
uitous sensing is the production of abundant data, some of it in volumes which can be
quite taxing for current methods.4 Data-driven models have enabled several successes
and their advantages (being realistic digests of phenomena, allowing for uncertainty
intrinsically, etc.) are widely recognized. Nevertheless, one crucial disadvantage is that
their usefulness is proportionate to the similarity of circumstances involved: specifically,
good predictions are made in conditions most similar to those previously encountered.
As models, they are opaque boxes for capturing phenomenological regularity, rather
than the actual processes that operate underneath.
(B) Models as structured representations of the world Models for understanding
organizational and causal relationships (especially those that prioritize parsimony and
malleability) may make predictions with greater error than data-driven models, but
are useful because they express semantic content, or deeper structure, or both. In
this category, one might cast the net widely to include analytical models that describe
governing laws symbolically (e.g., fundamental physical processes via equations), tra-
ditional AI models (e.g., the situation calculus), and social theories (e.g., the theory of
joint intentions), and the like. Some of these representations are used at design-time
so that these structures, or factors they entail, become implicit within the robot sys-
tem. Others are used within robots at run-time, usually in a fairly straight-forward
way — often for prediction, illustrating some of the overlap with definition A.
(C) Models as characterizations of classes of problem domains Certain problem
settings become recognized as well-defined and important generalizations of common
situations. A successful model of such circumstances, one widely adopted, becomes the
focal point of intense and sustained research effort. Examples include CSPs, MDPs,
POMDPs, and the (kinematic) motion planning problem. The model in this sense
crystallizes axioms and precepts surrounding a worthwhile class of problem domains
by laying out modeling assumptions (e.g., a preference structure over solutions is to be
expressed via a scalar function). Once established, models in this form help mediate
the organization and unification of knowledge. Application-oriented experts cast their
problems into that form and gain understanding about their particular setting. And
problem-oriented experts invent solution methods for the model and generalizations to
it. The model forms the nexus of this effort. If these aspects represent the strengths,
the primary weakness is that models of this type become metaphors for the problem
that are so effective that they become conflated with the problem itself.5

4Ming Lin’s talk included the rime “Data, data everywhere...”
Many roboticists are motivated by the problem of having their robots perform new, challenging,
and useful tasks. They seek ways of reasoning about a variety of complex phenomena
(e.g., how tissue deforms when interacting with a manipulator, or the motion and dynamics
of cloth, etc.) and often end up adapting or optimizing established physical models in the
sense of definition B (but, sometimes, also definition A). An example might be employing,
say, a continuum mechanical treatment of non-rigid objects. Several discussants proposed
that integration and learning of both type-A models and type-B models in some unifying
way might be a useful and challenging direction for future research. They envisioned that
part of the learning process itself would incorporate aspects of verification and validation of
the models, and this could entail tackling inverse problems and parameter estimation (e.g.,
for material properties). Whether this is realized or not, roboticists do use existing models to
bring ideas from outside the discipline to bear on their application of interest. In a sense,
the model then is a succinct encapsulation of expertise and principles — making this use of
models similar in spirit to definition C above, where one is interested in the transfer of ideas
between two areas.
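The inverse-problem idea can be made concrete with a small sketch. Under an (assumed, purely illustrative) linear-elastic model F = kx for a compliant object, a material parameter such as stiffness can be recovered from noisy force and displacement measurements by least squares; the function names and numbers below are hypothetical, not drawn from any system discussed at the workshop.

```python
# Illustrative sketch only: estimating a material parameter (stiffness k in
# the linear model F = k * x) from noisy measurements via least squares.
# The data are synthetic; nothing here comes from a real robot system.

def estimate_stiffness(forces, displacements):
    """Closed-form least-squares fit of k in F = k*x (no intercept):
    k = sum(F_i * x_i) / sum(x_i^2)."""
    num = sum(f * x for f, x in zip(forces, displacements))
    den = sum(x * x for x in displacements)
    return num / den

# Synthetic "measurements": true stiffness 200 N/m plus small force noise.
true_k = 200.0
displacements = [0.01, 0.02, 0.03, 0.04]   # metres
noise = [0.05, -0.03, 0.02, -0.04]         # newtons
forces = [true_k * x + e for x, e in zip(displacements, noise)]

k_hat = estimate_stiffness(forces, displacements)
print(f"estimated stiffness: {k_hat:.1f} N/m")  # close to the true 200 N/m
```

The same pattern — fitting the parameters of an analytical (type-B) model to observations — scales up to richer settings such as continuum-mechanical tissue models, where the fit would typically be posed as a regularized optimization rather than a closed-form formula.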
2.7.3 Inadequate permeability at the boundary
The goals that much robotics and AI research holds in common, while sufficient to
direct activity in the same general direction, appear inadequate for maintaining a vigorous
circulation of ideas between the two communities. As evidence, we give two examples
raised during the workshop that can fairly be thought of as expressions of frustration, one
from each ‘side’:
1. Several researchers made statements to the effect that, generally, roboticists are unfamiliar
with current work in the AI literature, so they frequently reinvent techniques,
often in less principled ways. Examples of current AI techniques that seem
to be relatively little known include innovations in search methods like the use of
abstraction-based guidance, as well as various notions of suboptimal search (bounded,
contract, utility-based), etc.
2. A few researchers expressed disappointment in ostensibly impressive AI techniques that
are not practicable in real life applications, failing to operate in real-time or to manage
complexity. It was claimed that many techniques seem to be inadequately dynamic,
or make unreasonable assumptions regarding the relevant factors of the world that can
be modeled.

5 George Konidaris put it as "AI researchers sometimes make the mistake of thinking their models are
real." Or, as Arturo Rosenblueth and Norbert Wiener remarked, "The price of metaphor is eternal vigilance"
(attributed in Lewontin [11]).
Interpreting the preceding statements as warning signs, we might ask what pragmatic
steps can be taken to spur on greater sharing of ideas.
2.7.4 Models as shared medium
One solution is encouraging more widespread use of models (in the sense of definition C) as a form of lingua
franca to connect the two communities. A primary form of exchange across the boundary
between AI and robotics should be findings and new ideas placed in the context of shared
models. In the words used above, many roboticists are akin to “application-oriented experts”
and, broadly speaking, many AI researchers fit the mould of “problem-oriented experts.”
The models themselves (in the form of definition C) are not merely a common currency of
exchange but should actually become first-class elements of an intellectual discourse focusing
on the diversity of models available, their improvement, and interplay. And finally, progress
toward common goals could then be assessed in the context of such models, as they come to
represent established micro-theories of intelligent sensing, planning, and action.
This would involve some change to the current way of doing things. Roboticists, aware of
several existing models, might begin to tackle a new problem by adopting the most suitable
one, and casting the particulars of their problem into the appropriate formulation. They
would then leverage state-of-the-art algorithms, potentially using an existing implementation
as a starting point. On discovering that the model (or method, or implementation) proved to
be inadequate, the specific deficiencies would be documented and a new model (or an addendum
to an existing one) would be proposed. The AI researcher, rather than having a myopic
view focused only on certain models as they currently stand, remains attentive to their
shortcomings, both operationally and in terms of verisimilitude. Indeed, the AI researcher
also becomes actively engaged in the question of where and how the models fail. It is
conceivable that they could also be involved in formalizing this body of knowledge at a
higher level, forming, say, an ontology.
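The workflow described above can be illustrated with a deliberately tiny example: a corridor-navigation task cast into the MDP formulation (a type-C model) and handed to value iteration, a standard textbook algorithm. All states, rewards, and parameters below are hypothetical choices made for illustration, not a description of any particular system.

```python
# Minimal illustration: cast a toy problem (a robot in a 1-D corridor seeking
# the goal cell) into the MDP formulation, then apply an off-the-shelf
# algorithm (value iteration). Every number here is an illustrative assumption.

N_CELLS = 5          # corridor cells 0..4; cell 4 is the goal
ACTIONS = (-1, +1)   # move left or move right
GAMMA = 0.95         # discount factor
GOAL = N_CELLS - 1

def step(state, action):
    """Deterministic transition: movement clamped to the corridor."""
    return max(0, min(N_CELLS - 1, state + action))

def reward(state, action):
    """+1 for entering the goal cell, a small movement cost otherwise."""
    return 1.0 if step(state, action) == GOAL else -0.04

def value_iteration(tol=1e-6):
    """Standard value iteration over this finite MDP."""
    V = [0.0] * N_CELLS
    while True:
        delta = 0.0
        for s in range(N_CELLS):
            if s == GOAL:
                continue  # the goal state is absorbing with value 0
            best = max(reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(V):
    """Extract the greedy policy from the converged value function."""
    return [max(ACTIONS, key=lambda a: reward(s, a) + GAMMA * V[step(s, a)])
            for s in range(N_CELLS)]

V = value_iteration()
print(greedy_policy(V))  # prints [1, 1, 1, 1, 1]: always move right
```

On discovering that, say, sensing uncertainty breaks the deterministic-transition assumption, the same problem would be re-cast into a richer model (e.g., a POMDP) — exactly the document-and-revise loop described above.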
If the idea of type-C models as shared vocabulary were adopted, the vitality of research at
the boundary of AI and robotics could then be assessed by looking at the catalog of models
and approaches, and the development of these over time. An examination of how insights
from either side have brought changes and new ideas to this catalog would lead to valuable
meta-level lessons; indeed, it is in this sense that models then would have become first-class
topics of the research discourse.
3 Breakout sessions
The afternoon portion of the workshop had three breakout sessions that were run concurrently.
The organizers selected the discussion topics beforehand by picking questions that
could be tackled usefully through group discussion, would benefit from collective
brainstorming, and would complement each other. In the run-up to the workshop, two suitable
invitees were identified to chair each topic. The workshop itself operated in a self-organized
fashion; the attendees picked topics and contributed their views to whichever session they
saw fit. After a discussion phase lasting approximately forty-five minutes, one chair from
each session summarized the discussion in a presentation before the entire audience.
The chairs prepared a more complete written report on their session's discussion after
the workshop. In the next three sections we include those reports. This is followed by
an additional section providing brief supplementary commentary to those reports, aiming
mainly to situate them within the themes identified above and to include some additional
comments made outside of the breakout sessions.
3.1 Grand Challenges at the Boundary of AI and Robotics
Breakout chairs and authors: Peter Stone, University of Texas at Austin
Ming C. Lin, University of North Carolina at Chapel Hill
The breakout session on challenge problems for AI and robotics brought together about 20
interested participants in a very lively discussion. Though the discussion was free-form, this
report organizes the topics discussed into two main categories. First, we outline the general
features of a good challenge problem, specifically for unifying AI and robotics. Next, we list
several suggestions that were put forth for possible grand challenge problems. Below we summarize the
issues raised in an attempt to reflect the overall diversity of opinions expressed.
With regard to features, the following ideas were put forth.
• Performance metrics. A good challenge needs clear metrics by which to compare
alternate approaches and measure progress.
• Data sets. It was suggested that a good challenge ought to either provide or lead
to good benchmark data sets that can be used offline by the community. However,
the group also recognized that there may be a tension between static data sets and
challenges that require embodiment and interactivity.
• Open world. From an AI perspective, a good challenge ought to include the re-
quirement that the robot or system be able to react to new, dynamic, or unexpected
situations or concepts that are not fully specified in advance. For example, a challenge
including vision should not specify all the types of items that will be in the environment
in advance.
• Real time. From a robotics perspective, a good challenge ought to require real-time
operation in the real world.
• Long duration. Rather than turning on a robot to do a specific task, it was suggested
that a good challenge would require long-term autonomy.
• Standardized platform and software. It was suggested that a good challenge ought
to build upon platforms and domains that are already working robustly. On the other
hand, it was recognized that too much reliance on standards can cause the community
to become stuck in local optima or conform to existing development, thereby losing
creative outlets for exploring out-of-the-box thinking.
• Practical task. It was considered a positive feature of a challenge if it is focused on
a task that people actually want done. On the other hand, there was some concern
raised that a challenge for the academic community ought not replicate problems likely
to be solved by industry, either now or in the near future.
• Learning. Challenges that include learning of common sense for reasoning, learning
from humans and examples, and/or learning and knowledge discovery from big data
were all considered of interest. Several people cited detecting and adjusting to
human preferences, emotions, and behaviors as a stimulating and appealing challenge
component.
• Related to both AI and robotics. During the report back to the whole workshop
about the breakout session, the point was raised that, to be relevant to this community,
a challenge ought to push the state of the art in both robotics and AI, not just one
or the other. That is, it should require both a physical level, including control, sensing,
locomotion, and/or manipulation, and a cognitive/AI level, including reasoning,
planning, and/or learning.
Perhaps because the discussions were interleaved (or perhaps because it is very difficult
to meet all of them simultaneously!), not all of the concrete challenges that were suggested
exhibit all of the above features. With this caveat, we list the suggested challenges next.
• Robot scavenger hunt. One possible challenge is to provide robots with a list of items
to find or tasks to execute over the course of several hours, especially in a crowded
environment, such as a conference, a marketplace, etc. For example, the robot could
be asked to take a picture of a busy scene; it could be asked to deliver an item to a
person; or it could be asked to answer some questions. This challenge would require
long-term autonomy, navigation, and possibly dexterous manipulation. It would also
be open-world. A twist on this challenge would be to structure it like the TV show
“The Amazing Race” in which a robot and person team up, and only get the next clue
after meeting the previous challenge.
• Human-robot interaction. While there has already been some research in individ-
ual labs on human-robot interaction for furniture assembly, it could potentially be
codified into a challenge with metrics. This challenge would particularly emphasize
intent and activity recognition, as well as manipulation. Similarly, collaborative device
‘disassembly’, for example to remove batteries and recycle components, was proposed.
• Robot soccer with multiple robots and persons. While RoboCup has been a
long-standing, very successful robot-soccer challenge, including an element of human-
robot cooperation was suggested as a good future direction.
• Adjustable autonomy healthcare devices. Healthcare is an area where robotics/AI
can make a profound and long-lasting impact. For example, creating a wheelchair that
can both accept low-level human control and operate autonomously was put forth as a
potential challenge. A particular focus could be on understanding when and how the
robot should cede control to the user.
• Personal and household robots. Finally, codifying a challenge around personal
and household tasks is another possible area where pervasive impact of AI/robotics
technology can be visibly appreciated. It was recognized that the RoboCup@Home
competition has already established a successful competition in this domain.
Due to time constraints, the discussion was conducted in the spirit of brainstorming, and
not all of these challenges hold equal promise, nor is the list above exhaustive. Some other
possible grand challenges mentioned throughout the talks in the workshop include:
• Field Robots (underwater, ground and aerial vehicles)
• Service Robots (building/tour/museum guides, rehabilitation/assistive support, edu-
cation, manufacturing, automation)
• Social Robots (pets, toys, games, companions, etc.)
There is a general consensus that with the recent simultaneous advances in both robotics
and AI, the potential impact from solving many grand challenge problems is high. The great-
est advances will likely lie in areas where the environment may be unstructured, changing,
and unpredictable, while the robots are interacting and collaborating with humans whose in-
tent and behaviors are not known in advance. We hope that both the challenge features and
specific examples of possible grand challenges put forth in this report will contribute to the
ongoing community-wide effort to advance research at the intersection of AI and robotics.
3.2 Recommendations to Funding Agencies
Breakout chairs and authors: Reid Simmons, Carnegie Mellon University
Carme Torras, Polytechnical University of Catalunya (UPC)
Approximately 20 people participated in a breakout session on “Recommendations to Fund-
ing Agencies,” moderated by Reid Simmons and Carme Torras. The group discussed how
funding agencies (largely directed towards NSF) could more effectively support research at
the boundary of AI and Robotics. Discussion was aided by the participation of Lynne Parker,
Division Director of CISE.
A common suggestion, made only somewhat in jest, was the need for more money to
support research. There was significant concern, especially amongst researchers from the US,
that both the number and size of available grant opportunities were insufficient, especially
for projects that involve testing on actual robot hardware. It was suggested that more
support from industry might help alleviate some of the pressure on funding agencies, but it
was also pointed out that there is a tension between the type of basic research promoted by
funding agencies (especially NSF) and the shorter-term goals of industry.
A significant portion of the discussion related to challenge- and competition-based ini-
tiatives. Such initiatives, most notably the various DARPA robotics challenges, but also
challenges by NASA and competitions by independent organizations, such as RoboCup and
X-Prize, seek to catalyze research in a particular area by having very specific, but very chal-
lenging, goals and opening up participation to any interested party. Many of the challenges,
such as RoboCup, have both a long-term vision (e.g., beating a world-class human soccer
team by 2050) and shorter-term, more easily attainable goals, although the boundaries are
usually pushed as competitors get close to achieving them. Challenges and competitions
remain very popular, both with researchers and the public.
Some such challenges, such as DARPA’s, provide partial support to (some of the) partici-
pants, but most rely on the participants finding their own funding to cover development and
travel costs. Most of the challenges (RoboCup is a prominent exception) provide monetary
prizes to the winner, but even so the prize is usually only a small fraction of development
costs, and most competitors get nothing. Incentive for participating is often the “bragging
rights” received for doing well and the expectation of further funding, either from government
sources or, increasingly, from industry. It was suggested that having challenges
supported and/or run by industry might be a way to provide more support for participants
and alleviate pressure on the funding agencies, although it is not clear why industry would
see this as an attractive proposition.
It was also noted that such challenges tend to be difficult for a single PI, or even a single
university, to support. Thus, structuring such challenges to encourage interdisciplinary
and multi-university participation is important for their long-term viability. In addition,
expanding the participation in this way will promote the type of collaboration that is im-
portant in robotics. It was also noted that the challenges need to incorporate elements of
basic research, but that a tension exists between doing basic research that could move the
field forward and the pressure of developing a robust, working system, especially given the
often tight deadlines imposed by the challenge. This is especially true for challenges at the
boundary of AI and Robotics, where the hardware is often not available until late in the
timeline, and so the opportunity to develop and deploy innovative AI techniques may be
limited.
This desire to encourage interdisciplinary teams of researchers was further emphasized
in regard to basic research funding opportunities. It was pointed out that research that
combines AI and Robotics is inherently interdisciplinary, but that forming such teams is
often difficult, and not always appreciated by reviewers. It was suggested that calls for
proposals should encourage, or even specify, the need for multi-disciplinary teams, and that
reviewers should be solicited who understand and appreciate the need for, and difficulty of
coordinating, multi-disciplinary research.
There was significant discussion on how, and how strongly, funding agencies should steer
research toward particular areas. As discussed above, challenges and competitions
strongly push the field in certain directions. In addition, specificity in calls for proposals, and
the use of techniques such as NSF’s “dear colleague” letters to request proposals in certain
areas, can have significant impact in moving the field towards some desired objective. But,
once again, it was pointed out that a tension exists between top-down calls and bottom-up
efforts, and that the latter can lead to unanticipated innovations or even new areas of research. In the
end, it was felt that some combination of top-down and bottom-up solicitation is needed,
but there was no real consensus on the best split.
Finally, there was some support for calls with a large focus on commercialization and
collaboration with industry, exemplified by the EU Horizon 2020 effort. This viewpoint was
not universally accepted, however, as others felt that funding agencies should focus more on
basic research.
An interesting suggestion was to create an umbrella network organization that would
work towards facilitating communications and research in AI and Robotics. The analogy
was given to the EU robotics organization (euRobotics AISBL: http://www.eu-robotics.
net/), which includes, and is partially funded by, industry (in the EU, the SPARC initiative
has €700M in funding from the Commission for 2014–2020, and triple that amount from
industry). While this organization was purely academic for its first dozen years (EURON network:
http://www.euron.org/), since 2012 it has been joint with industry. A similar umbrella
organization for AI and Robotics could hold yearly meetings to bring together researchers,
government and industry to discuss progress, goals and issues in the field. It could generate
roadmaps for research, development and deployment. It could help support non-technical
issues critical to the field, such as social, regulatory and ethical issues associated with AI
and Robotics (including advocating for basic research in these areas). It could also serve
as a repository of educational materials, disseminate videos, and give out awards, such as a
best PhD award and technology transfer award. In addition, the organization could prepare
and present briefings to government and industry leaders, and lobby for increased research
funding and associated legislation in existing or new areas of research.
Lastly, there was discussion of infrastructure needs. Research in AI and Robotics typically
requires a significant investment (of both time and money) in both hardware and software
infrastructure. While several open-source software repositories exist (such as ROS, OMPL,
OpenCV), it was emphasized that much more could be done to both expand such efforts and
integrate them (such as developing ROS extensions to support AI algorithms and techniques).
Funding agencies can play a role here by providing infrastructure grants to develop, expand,
document, maintain and publicize existing and new repositories. Repositories of data sets
and benchmark problems could also be supported; most are now made available only through
the good graces of the researchers (although NSF’s recent requirement of a data management plan
for every proposal is a good step in this direction).
In a similar vein, there was discussion of developing national robotics test beds, where
researchers could develop and test approaches either on site or remotely. The test beds,
which could focus on areas such as navigation, manipulation, manufacturing, and service robots,
would be hosted at universities or national labs, with funding available to both develop and
maintain the facilities (including documentation, technical support staff, remote operations,
and creation of high-fidelity, physics-based simulators with interfaces identical to the actual
hardware). While the hurdles in making such facilities usable by large numbers of researchers
are formidable (especially for remote operations), it was felt that it could have tremendous
impact in making high-end robotics widely available, along with the development of share-
able software resources. The analogy was given to early nationally-funded semiconductor
fabrication facilities, where researchers could submit designs and receive fabricated chips in
only a few weeks. These facilities both enabled researchers with limited means to do research
in this area and helped solidify commonly accepted standards and interfaces.
Overall, it was generally agreed that government/industry collaboration is an important
direction for future funding, although one has to be mindful of the tension between the push
for short-term accomplishments and longer-term basic research (similarly with challenges and
competitions). The tension between top-down, government-initiated specification of research
goals and bottom-up, researcher-driven initiatives was avidly discussed, with consensus being
only that some combination was desirable. Lastly, the group discussed at some length the
creation of an umbrella organization, composed of researchers, industry and government,
that could serve as a focal point and repository for research in AI and Robotics, as well as
a vehicle for funding and research in both technical and non-technical areas, such as social
and ethical issues related to AI and Robotics.
3.3 A Task Force for AI and Robotics
Breakout chairs and authors: Robin Murphy, Texas A&M University
Wheeler Ruml, University of New Hampshire
Participants in this break-out session were charged with developing activities that would
solidify the various efforts underway to more tightly link research in AI and in robotics. The
central activity that was identified was the formation of a task force to serve as the steward
of the nascent community at the intersection of AI and robotics. While it is nice to have
the AI-robotics mailing list (via Google Groups), participants felt that some organization
needs to actively coordinate and spur efforts to keep the two communities interacting. Such
efforts are well beyond the scope of any single individual and yet fall outside the purview of
existing organizational structures.
The task force would:
1. Identify opportunities in the broadest sense of the word: funding, events, publications
and tools fostering collaborations.
2. Proactively create opportunities, such as workshops that rotate among conferences,
funding to support invited speakers or “best of the other conference” sessions, and
meaningful competitions.
3. Cheerlead and suggest awards for exemplars of work at the intersection of AI and
robotics. Encourage parallel publications in AI and Robotics venues.
4. Generate metrics for progress, mechanisms for joint interactions and joint review.
5. Curate a collection of educational and tutorial materials, identifying publications of
interest to researchers interested in both AI and robotics. Publish edited volumes
targeting specific topics at the borders.
6. Assist funding agencies in monitoring competitions and challenges. Refresh the roadmap
annually and produce a candid “state of the intersection” report every two years,
including conferences and activities.
Participants envisioned the task force being relatively small, with a balanced set of mem-
bers selected by a consortium of agencies, professional organizations, and conferences. These
could include NSF/NRI, AAAI, IEEE RAS, IEEE PAMI, SMC, DARS, WAFR, ICAPS,
SoCS, AAMAS, and ICML.
The immediate goal is the formation of the task force and holding a meeting within 12
months, with the goal of bootstrapping an AI-Robotics roadmap. NSF CCC was suggested
as a possible source of resources to support the effort.
3.4 Supplementary comments
3.4.1 Grand Challenges at the Boundary of AI and Robotics
The breakout session on the topic of challenge problems for unifying AI and robotics attracted
a great deal of energy and gusto. It was chaired by Ming Lin and Peter Stone, whose full
report, in Section 3.1, shows that the discussion touched on topics in most of the seven
preceding themes. Many of the talks earlier in the day made reference to RoboCup, DARPA
Challenges (Grand, Urban, Rescue), as well as a variety of other competitions, some of which
are affiliated with AAAI. The discussion of these efforts was balanced in recognizing both
the successes and the shortfalls; certainly challenge problems and competitions are topics
that researchers have strong feelings about. The particular value of the breakout report is
that it proposes several challenges that clearly complement those already in existence, and
each is likely to spur innovation in a direction that will help bridge AI and robotics efforts.
The notion of open-ended domains, mentioned in Section 2.1, comes through very strongly
in the suggested problems, and is echoed or mentioned in several of the features of good
challenge problems that the report outlines. Similarly, human-robot interaction aspects (see
Section 2.2) are well represented. The breakout discussion reflects the fact that deliberation
and careful thought are needed in defining challenge problems. Those involved in picking the
setting, choosing the rules, and deciding how they are defined, ultimately determine whether
the outcome will be overwhelmingly an integrative engineering exercise, something which
drives new scientific discovery, or both. (The importance of this is laid out explicitly in the
DSB Task Force Report [2, pg. 9] mentioned in Section 2.5.)
It should be clearly stated that designing challenge problems is not the same thing as
devising robot competitions, although it is possible that the two can coexist. As an ad-
ditional refinement, it is worth clearly demarcating events intended to serve primarily for
outreach from those events intended for research purposes, although again both can coex-
ist. Performance evaluation may be an additional orthogonal dimension. The testing and
evaluation of autonomous systems remains a challenging problem in and of its own right
(despite frequent discussion of this aspect, the field has yet to produce scalable benchmarks
comparable to those available in other areas of computing, cf. [1]).
3.4.2 Recommendations to Funding Agencies
In addition to providing a very useful international context, the report authored by Reid
Simmons and Carme Torras (Section 3.2) artfully describes the balancing act involved in
guiding research outcomes through incentives, managing high-level priorities, and adjusting
to changing conditions. In describing the potential interplay with industry, we feel that it
is very effectively complemented by the theme presented in Section 2.6. It also underscores
the importance of societally relevant robots (part of the theme in Section 2.2) albeit from a
different angle.
3.4.3 A Task Force for AI and Robotics
The report authored by Robin Murphy and Wheeler Ruml (in Section 3.3) reflects the
fact that the breakout session they chaired had the most definite and clearly delineated
topic of the three. The report they produced has great value in precisely specifying the
responsibilities and actions of a Task Force which, if established and supported, could play a
vital role in expediently bridging the AI and robotics research communities and maintaining
the connection in the future.
4 Perspectives on the interrelationship between AI and
robotics
This section contains some broader perspectives on the relationship between the two com-
munities, specifically looking past the technical and technological considerations that have
been the prime focus in the preceding discussion.
4.1 Differing philosophical traditions
Some aspects of the preceding observations about the status quo might be understood more
fully by examining, at a deeper, philosophical level, differences in thinking revealed by the
workshop attendees. It is necessary to generalize and simplify somewhat, but there could be
said to have been two basic philosophical positions represented. Especially in Sections 2.7.1
and 2.7.2, we see a distinction between those who treat the mathematical model as vera-
cious, imbuing it with some significance as a characterization of the essence of the problem,
and those who see them as flawed constructs in which a series of gradual improvements
is inevitable. In the first, there is classical idealism, and perhaps these predominantly AI
researchers would be comfortable being placed in the tradition of neoplatonism. The second
set are mostly roboticists who, in addition to being practical, could be characterized as
aligned with the school of pragmatism (in the sense of C. S. Peirce). This is a distinction quite
apart from the widely used one between “scruffies” and “neats” [19, p. 25].
Section 2.7.4 provides a specific recommendation on how the communities might coordinate
their efforts through a vocabulary of shared models. That proposal reflects an asymmetry
between theoretical and experimental scientists going back at least as far as the famously
acrimonious interactions of John Flamsteed (the first Astronomer Royal) and Isaac Newton
(then president of the Royal Society). The differing emphases and perspectives placed on the
significance of experimental findings by, on the one hand, those who conduct experiments
and, on the other, those who operate predominantly on models derived from experiments
can be stark. In terminology relevant to the topic at hand, Oberkampf and Trucano [16]
write: “A competitive and frequently adversarial relationship (at least in the US) has often
existed between computationalists (code users and code writers) and experimentalists, which
has led to a lack of cooperation between the two groups.” While Oberkampf and Trucano
are writing about computational fluid-dynamics, the warning is clear. Emphasizing the
synergistic nature of all parties is one aspect a future Task Force for AI and Robotics (see
Section 3.3) might assist with. Given these comments about potential friction between
researchers, earlier comments about the difficulties involved in publishing findings about
brittle behavior (cf. Section 2.1), the current scarcity of articles that present negative results,
and the challenges inherent in producing repeatable robot experiments, there is certainly still
some ground to cover before the ideas outlined in Section 2.7.4 are realized.
Mention has already been made of the fact that the two disciplines have idiosyncratic
views. We believe that, ultimately, these reflect differing underlying values and cultures,
and such distinctions in philosophy necessarily color judgements made about many aspects of
the research. Many of the processes of scientific research (including dissemination of results via
publication, the processes of peer-review and tenure evaluation, etc.) depend on judgements
tied to socially influenced considerations.6 Thus, when the values of one group differ from
those of another, these differences can act as a wedge that bifurcates disciplines which, at
least to an external observer, appear to be one and the same. Of course, this is a
simplification: the culture, value systems, and content of the work all evolve together. We
merely emphasize this perspective here to help give a more complete picture.
4.2 Déjà vu — history repeats itself
The suggestions outlined in Sections 2.1 and 2.2 promote investigation of general open-ended
problem domains. Implicit within the discussion in those sections is a critique of much of
the progress and improved performance that has been realized by efforts with a narrow
focus. This critique reflects the views of several workshop discussants who expressed their
disappointment with established methods that failed to be useful for their problems of study.
The pattern of this discussion has appeared before. For example, Feigenbaum [7] writes: “A
view of existing problem solving programs would suggest, as common sense would also, that
there is a kind of ‘law of nature’ operating that relates problem solving generality (breadth
of applicability) inversely to power (solution success, efficiency, etc.), and power directly to
specificity (task-specific information).” Some years later came the suggestion of Nilsson [14]
that, even if there is considerable utility in addressing specialized problems, there exists a
point of diminishing returns beyond which the lessons gained from specific narrow domains
will fail to be informative for intelligence more generally. The reasons and incentives that
Nilsson identified as explaining the field’s intensely narrow focus twenty years ago are just
as applicable today,7 a fact which is rather telling.
6Bernardine Dias phrased elements of this as an important need for “belonging.”
7Section 4 of “Eye on the Prize” [14] is especially prescient.
In its short history, the field of Artificial Intelligence has had several instances of subareas
and specialized application domains effectively separating into autonomous sub-disciplines,
each with their own research community, conferences, journals, and organizations. The
obvious examples are computer vision (see Section 2.4), statistical machine learning for data
analytics, expert systems, speech recognition and natural language processing (NLP), and
so forth. With regard to robotics, other traditions, especially mechanical engineering and
control-theoretic perspectives, have certainly given the area a self-identity in which AI
is seen as only one part (cf. Section 2.5). Today there is still considerable overlap and
a sense of common history: for example, both the AAAI’15 and ICRA’15 conferences hosted
celebrations of 50 years since SRI’s Shakey project. On the basis of special publication
tracks and competitions, it is probably fair to say that the AI community works harder to
court the attention of roboticists than vice versa. But robotics differs from the preceding
instances (NLP, say) in its vast span: the extent to which it touches many of the
problems underlying the grand pursuit of intelligence is quite unique. Thus, we believe, the
effort required to cultivate a constructive and forward-looking relationship between these
two communities is warranted.
5 Conclusion
Most attendees of the workshop are engaged daily in active research to help make intelligent
robots a reality. But the meeting was an opportunity for a broader appraisal: for an assess-
ment of the difficult problems that fall into the gaps between the various respective efforts,
for an examination of and commentary on the social aspects of the research communities, and for
longer term thinking. We are hopeful that attending to the challenges identified herein will
loosen those sticking points which are impeding headway.
Commenting on the separation of robotics and AI at the time (2006), the inimitable Nils
Nilsson made the rather droll remark:
“I think that they’ll come back together. I think that robotics is a very interest-
ing vehicle for understanding how to integrate many many different aspects of
AI.” [15]
Acknowledgements
This workshop was made possible by generous support from the National Science Foundation
under award #1349355, and is overseen by the advisory board of Ron Alterovitz, Maxim
Likhachev, and Sven Koenig.
The authors also wish to thank Marco Morales and Sam Rodriguez whose notes were a
useful supplement to our own. Several of the specific ideas in Section 2.7 had their genesis
in conversations one author (Shell) had with Chris Amato and George Konidaris. He is
especially grateful to them for their time and ideas.
Appendix A Workshop Invitees
A full list of the sixty people who responded to the public calls for interest is
available on the workshop website [3]. The following is a list of researchers and stakeholders
who gave invited talks, participated in the panel discussion, or led breakout discussions:
Ruzena Bajcsy, University of California, Berkeley.
Michael Beetz, University of Bremen.
Alícia Casals, Polytechnic University of Catalonia (UPC).
Bernardine Dias, Carnegie Mellon University.
Maria Gini, University of Minnesota.
Yi Guo, Stevens Institute of Technology.
Luca Iocchi, Sapienza University of Rome, Italy.
George Konidaris, Duke University.
Ben Knott, AFOSR.
Jana Košecká, George Mason University.
Ben Kuipers, University of Michigan.
Max Likhachev, Carnegie Mellon University.
Ming C. Lin, University of North Carolina at Chapel Hill.
Robin Murphy, Texas A&M University.
Lynne Parker, National Science Foundation.
Wheeler Ruml, University of New Hampshire.
Matthias Scheutz, Tufts University.
Reid Simmons, Carnegie Mellon University.
Peter Stone, University of Texas at Austin.
Gita Sukthankar, University of Central Florida.
Katia Sycara, Carnegie Mellon University.
Dawn Tilbury, University of Michigan.
Carme Torras, Polytechnic University of Catalonia (UPC).
Manuela Veloso, Carnegie Mellon University.
Brian Williams, Massachusetts Institute of Technology.
References
[1] Standard Performance Evaluation Corporation (SPEC). http://www.spec.org/.
[2] Defense Science Board Task Force Report: The Role of Autonomy in DoD Systems.
Technical Report DTIC–#ADA566864, July 2012. http://www.acq.osd.mil/dsb/
reports/AutonomyReport.pdf.
[3] Website for the NSF Sponsored Workshop: Research Issues at the Boundary of AI and
Robotics, 2015. http://robotics.cs.tamu.edu/nsfboundaryws/.
[4] H. Alkhatib, P. Faraboschi, E. Frachtenberg, H. Kasahara, D. Lange, P. Laplante,
A. Merchant, D. Milojicic, and K. Schwan. IEEE CS 2022 report. Technical report,
IEEE Computer Society, Feb. 2014. http://www.computer.org/web/computingnow/
2022-Report.
[5] P. S. Churchland, V. S. Ramachandran, and T. J. Sejnowski. A Critique of Pure
Vision. In C. Koch and J. Davis, editors, Large-Scale Neuronal Theories of the Brain.
MIT Press, 1994.
[6] A. Feenberg. Questioning Technology. Routledge, London, U.K., Apr. 1999.
[7] E. A. Feigenbaum. Artificial Intelligence: Themes in the Second Decade. In Infor-
mation Processing ’68, pages 1008–1022, Amsterdam, 1969. North Holland Publishing
Company. Available as Stanford AI Project Memo No. 67, Aug. 15, 1968.
[8] S. Helmer, D. Meger, P. Viswanathan, S. McCann, M. Dockrey, P. Fazli, T. Southey,
M. Muja, M. Joya, J. J. Little, D. G. Lowe, and A. K. Mackworth. Semantic robot
vision challenge: Current state and future directions. CoRR, abs/0908.2656, 2009.
http://arxiv.org/abs/0908.2656.
[9] B. H. Kazemier and D. Vuysje, editors. The Concept and Role of the Model in Mathe-
matics and Natural and Social Sciences. Gordon and Breach Science Publishers, New
York, 1963. Cited in Raphael [18].
[10] T. S. Kuhn. The Structure of Scientific Revolutions. University of Chicago Press,
Chicago, third edition, 1996.
[11] R. Lewontin. In the beginning was the word. Science, 291:1263–1264, 2001.
[12] C. Mead and L. Conway. Introduction to VLSI Systems. Addison-Wesley, Reading, MA,
1980.
[13] R. R. Murphy. Meta-analysis of Autonomy at the DARPA Robotics Challenge Trials.
Journal of Field Robotics, 32(2):189–191, Mar. 2015.
[14] N. J. Nilsson. Eye on the Prize. AI Magazine, 16(2), Summer 1995.
[15] N. J. Nilsson, 15 July 2006. Interviewed by Matt Peddie and made available as
part of the AI oral history project, http://projects.csail.mit.edu/films/aifilms/
AAAIfootage/done/Nilsson.mp4.
[16] W. L. Oberkampf and T. G. Trucano. Verification and validation in computational fluid
dynamics. Progress in Aerospace Sciences, 38:209–272, 2002.
[17] W. Pitts and W. McCulloch. How we know universals — the perception of auditory
and visual forms. Bulletin of Mathematical Biophysics, 9(3):127–147, Sept. 1947.
[18] B. Raphael. SIR: A Computer Program for Semantic Information Retrieval. PhD thesis,
Massachusetts Institute of Technology, June 1964. Available as AITR-220.
[19] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice-Hall,
Inc., Upper Saddle River, NJ, third edition, 2010.
[20] United Nations. Millennium Development Goals, 2015. http://www.un.org/
millenniumgoals/.