Intelligent objects to control and assist participants' activities

An Electronic Institution models roles and activities as they happen in a real institution. A Social Virtual World therefore gives a 3D appearance to an EI specification: participants (both humans and software agents) are represented as avatars in the virtual world, and some participant actions can be controlled and assisted by means of iObjects. The virtual world is generated from the multiagent system specification (using the ISLANDER tool) as described in (Bogdanovych, 2007).

iObjects are entities having both visualization properties and decision mechanisms that help to improve human participation in a VW in the following ways:


• Representation of the execution context. They provide an effective mapping of the institutional state (e.g. the current price of a good in an auction) into the 3D virtual world. Hence, they facilitate participants' perception of the current state and its changes.

• User participation. To some extent, iObjects resemble real-world objects in appearance and in the way they are used (interacted with). Hence, they provide an intuitive way to participate in the institution by interacting with the iObjects populating the virtual world, for instance by opening a door to leave a room or by pressing the accept button on a remote control to accept an offer from another agent within a negotiation process (see the sketch after this list).

• Enforcement of norms. iObjects collaborate with the other elements of the run-time environment in the enforcement of the institutional rules. Furthermore, they can inform users when a norm has been violated and, optionally, guide the user so as to avoid repeating the wrong action.

• Guidance and learning of user actions. They can incorporate a knowledge base to guide user participation (i.e. actions) inside the virtual environment. An iObject with learning abilities may gain knowledge about user actions within the simulated environment and later apply this knowledge to facilitate future user participation.
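
As an illustration of the user participation point above, the following is a minimal sketch of an actionable iObject (a remote control) that maps a button press in the 3D world onto an institutional action. The Ameli interface, the action name and the method signatures are assumptions made for this example, not the actual infrastructure API.

```java
// Illustrative sketch: an actionable iObject (remote control) translating an
// avatar interaction into an institutional action. All names are assumed.
interface Ameli {
    void submit(String participantId, String action, Object... args);
}

class RemoteControlIObject {
    private final Ameli ameli;

    RemoteControlIObject(Ameli ameli) {
        this.ameli = ameli;
    }

    /** Invoked when the avatar presses the "accept" button of the remote control. */
    void onAcceptPressed(String avatarId, String offerId) {
        // The press is forwarded as the corresponding institutional action,
        // so that it can be checked against the institutional rules.
        ameli.submit(avatarId, "acceptOffer", offerId);
    }
}
```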

An iObject may have several sensors (which allow it to capture events from the environment) and some effectors (which allow it to act upon the environment). In the context of normative and social virtual worlds, by environment we mean both the virtual world and AMELI. AMELI is the component that keeps the execution state and is capable of verifying that an action complies with the institutional rules. An iObject's central component is a decision module which determines, taking sensor inputs into account, the actions of the iObject's effectors.
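
A minimal sketch of this structure is given below; the Sensor, Effector and Event types and the method names are illustrative assumptions, not the actual implementation.

```java
// Minimal sketch of the iObject structure (sensors, effectors, decision module).
import java.util.List;

record Event(String type, Object payload) { }     // event from the VW or AMELI

interface Sensor {                                // captures events from the environment
    boolean matches(Event e);
}

interface Effector {                              // acts upon the VW or AMELI
    void act(Event e);
}

abstract class IObject {
    protected final List<Sensor> sensors;
    protected final List<Effector> effectors;

    protected IObject(List<Sensor> sensors, List<Effector> effectors) {
        this.sensors = sensors;
        this.effectors = effectors;
    }

    /** Decision module: decides which effector actions follow from a sensed event. */
    protected abstract void decide(Event e);

    /** Called by the run-time environment for every environment event. */
    public final void perceive(Event e) {
        if (sensors.stream().anyMatch(s -> s.matches(e))) {
            decide(e);
        }
    }
}
```

A concrete iObject, such as a door or a notice board, would supply its own sensors and effectors and implement decide() with its particular decision logic.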

Through their sensors, iObjects can perceive events occurring in the virtual world due to avatar actions and movements. For instance, touch sensors allow iObjects to perceive avatars interacting with them, while proximity sensors allow them to react to avatars' presence. An iObject can also interpret gesture events, which allow it to act according to avatar gestures, for example a shaking head meaning "I disagree" in an e-business meeting or a raised hand meaning "I want to bid" in an auction house. Another source of events for iObjects is AMELI. That is, iObjects should be aware of changes in the execution state, named state variables in Figure 1: for example, changes in the interaction context within a scene (e.g. the current price of a good in an auction house), the fulfilment of a pending obligation by a participant, or norm changes (e.g. a door has been opened to everyone because a scene activity has finished). When an iObject's sensor captures an event from the environment, one of the iObject's effectors reacts to it as a consequence. It is worth mentioning that in some cases, although the required reaction takes place in the virtual world (e.g. opening a door), that reaction may depend on the compliance of the avatar's action with the institutional rules. In this case, the iObject requests institutional verification of the action from AMELI by using its norm enforcement effectors. The door will then only open if the avatar is allowed to leave the room, which is checked by contacting AMELI. Furthermore, iObjects can also be informed about the result, executed or failed, of the actions for which they requested institutional verification; in this way, they can inform the user about the result of the action in a friendly way.
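
The door example above can be sketched as follows; the Verifier interface, the VerificationResult values and the callback names are assumptions made for illustration.

```java
// Hedged sketch of the door behaviour: the reaction in the virtual world is
// conditioned on AMELI's verification of the avatar's action.
enum VerificationResult { EXECUTED, FAILED }

interface Verifier {
    /** Asks AMELI whether the action complies with the institutional rules. */
    VerificationResult verify(String avatarId, String action);
}

class DoorIObject {
    private final Verifier ameli;

    DoorIObject(Verifier ameli) {
        this.ameli = ameli;
    }

    /** Touch sensor callback: an avatar tries to leave the room through this door. */
    void onTouched(String avatarId) {
        VerificationResult result = ameli.verify(avatarId, "leaveScene");
        if (result == VerificationResult.EXECUTED) {
            open();                                   // effector acting on the virtual world
        } else {
            notifyUser(avatarId, "You are not allowed to leave this room yet.");
        }
    }

    private void open() { /* animate the door opening in the virtual world */ }

    private void notifyUser(String avatarId, String message) { /* friendly feedback */ }
}
```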

Effectors act upon the virtual world by changing several properties of the iObject itself: its aspect (e.g. color, geometry, textures), the information that some types of iObjects provide (e.g. a notice board) and its transformation properties (e.g. position, rotation and scale). For example, an intelligent e-business room may scale up when an increasing number of clients populate the space or, if the required change of dimensions is difficult to achieve by a scaling transformation, it may even replicate itself. An iObject's effectors may also keep AMELI informed about changes in the current state of execution; for example, a door iObject informs AMELI that an avatar has moved from one scene to another.
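
The e-business room example might look as follows; the occupancy threshold and the method names are assumptions. A symmetric effector, not shown here, would notify AMELI when an avatar crosses a door between scenes.

```java
// Sketch of effectors changing the iObject's own transformation properties,
// following the e-business room example above. All names and values are assumed.
class EBusinessRoomIObject {
    private static final int CLIENTS_PER_SCALE_UNIT = 20;   // assumed room capacity
    private double scale = 1.0;

    /** Proximity sensors report how many client avatars currently populate the room. */
    void onOccupancyChanged(int clientCount) {
        double needed = Math.max(1.0, (double) clientCount / CLIENTS_PER_SCALE_UNIT);
        if (needed != scale) {
            scale = needed;
            applyScale(scale);       // effector: update the room's transformation properties
        }
    }

    private void applyScale(double factor) { /* resize the room geometry in the VW */ }
}
```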

Fig. 1. Intelligent object structure

Every iObject may have some of the following features: actionable, state modifier, self-configurable, learnable. Actionable iObjects offer the avatar the possibility to act on them; examples are remote controls or touch screens. iObjects are state modifiers if they may change the execution state, as for instance a door or a remote control: in the first case because avatars move from one scene to another, and in the second by modifying the current winning bid within an auction. In contrast, a brochure, a touch screen or an item on sale is merely informative. A self-configurable iObject (e.g. a brochure or an item on sale) adapts its features according to changes in its environment. Finally, a learnable iObject may relieve the electronic institution infrastructure from performing the same norm check several times. For example, a door iObject may learn a pattern of norm enforcement (i.e. circumstances such as role and agent state that let an avatar pass through the door) so that next time it is not necessary to query the MAS organizational infrastructure, as sketched below.
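
A minimal sketch of such caching, assuming a hypothetical normCheck query to AMELI keyed on role and agent state:

```java
// Sketch of a learnable door iObject that caches norm-check outcomes per
// (role, agent state), so repeated checks avoid querying AMELI again.
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiPredicate;

class LearnableDoorIObject {
    private final BiPredicate<String, String> normCheck;      // query to AMELI (assumed)
    private final Map<String, Boolean> learnedPattern = new HashMap<>();

    LearnableDoorIObject(BiPredicate<String, String> normCheck) {
        this.normCheck = normCheck;
    }

    /** Returns whether an avatar with this role and agent state may pass through the door. */
    boolean mayPass(String role, String agentState) {
        // Reuse a learned decision when available; otherwise ask AMELI once and remember it.
        return learnedPattern.computeIfAbsent(
            role + "|" + agentState,
            key -> normCheck.test(role, agentState));
    }
}
```

Such caching assumes the enforcement pattern remains stable; if the norms change at run time, the learned pattern would have to be invalidated.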

As can be seen in Figure 2, the human participates by controlling an avatar in the virtual world. Among other actions, the avatar can interact with the different iObjects within the virtual world. The user can perceive the different iObjects in the virtual world to stay aware of the execution context and use this information to decide which actions to perform.

Figure 2 distinguishes between iObjects at the scene/institution level and at the participant level. The first correspond to iObjects belonging to the scene infrastructure (e.g. a noticeboard) or to the institution infrastructure (e.g. a door). Figure 3 shows a notice board iObject displaying information about the good and its price (red salmon at 3 euros) for the current round within an auction room.

iObjects at the participant level give the user personal information about his participation in the SVW. Hence, each user perceives his own iObjects at this level, containing his information. They are placed in the user interface rather than in the virtual world. At this level there are three types of iObjects, namely the backpack, the information model notice board, and the history. The backpack keeps the user's pending obligations, which are shown by clicking on the backpack with the mouse. The information model notice board shows the current values of the user's information model attributes, which depend on his role; for instance, within an auction house a buyer's attributes may be his current credit and the list of purchased goods. The history shows a register of the user's participation (e.g. actions) within the institution.
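
The backpack, for instance, could be sketched as follows; the callback names and the plain string representation of obligations are illustrative assumptions.

```java
// Sketch of a participant-level iObject: a backpack keeping the user's pending
// obligations and showing them when clicked in the user interface.
import java.util.ArrayList;
import java.util.List;

class BackpackIObject {
    private final List<String> pendingObligations = new ArrayList<>();

    /** AMELI reports a new pending obligation for this participant. */
    void onObligationAdded(String obligation) {
        pendingObligations.add(obligation);
    }

    /** AMELI reports that a pending obligation has been fulfilled. */
    void onObligationFulfilled(String obligation) {
        pendingObligations.remove(obligation);
    }

    /** Mouse-click callback: return the obligations to be displayed in the user interface. */
    List<String> onClicked() {
        return List.copyOf(pendingObligations);
    }
}
```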

Fig. 2. Intelligent objects at scene and participant level


Fig. 3. Noticeboard iObject at Fish auction room