Software Interactions

Mireille Blay-Fornarino, Anis Charfi, David Emsellem, Anne-Marie Pinna-Dery, Michel Riveill
Laboratoire I3S, Sophia Antipolis CEDEX, France




By using an Interaction Specification Language (ISL), interactions between components can be expressed in a language-independent way. At the class level, interaction patterns specified in ISL are models of the interaction instances that will later be created on component instances.

An Interaction Server is in charge of managing the life cycle of interactions (pattern registration and instantiation, interaction destruction, merging). It acts as a central repository that maintains the global coherence of the adaptations realized on component instances. The Interaction Service allows creating interactions between heterogeneous components. Noah is an implementation of this Interaction Service. It can be thought of as a dynamic aspect repository with a weaver whose aspect composition mechanism ensures commutative and associative adaptations.


The interaction service allows the dynamic adaptation of component-based applications. It is based on the interaction model, in which interactions are described in a meta-language that is independent of the components' implementation language. Interaction patterns are registered on a specific server and then instantiated on component instances. This approach makes it possible to dynamically link components and to adapt their behaviour to their environment through interaction instantiation and destruction at runtime. The user defines interactions at the application level. Component interactions are the basis for application connectivity. We next explain what interactions are, introduce the interaction service through an example and describe the ISL language for interaction pattern definition. Finally, we discuss the rule merging mechanism, which is required when more than one interaction is applied to the same component. The merging mechanism ensures commutativity and associativity when several interactions are instantiated on the same component. It is based on the ISL language and ensures a consistent adaptation of the application by several users.


"The architecture of a software system defines that system in terms of components and interactions among those components" [6].

Interactions can be found almost everywhere in the real world, and they appear, more or less explicitly expressed, throughout the software lifecycle.

From components to interactions

A component is defined at least by the specification of its provided services and required services. The execution of a component-based application basically consists in reacting to messages (possibly events) sent by some component instances to other component instances. The behaviour of a component instance is characterised by the observable external semantics of its methods when it receives a message. These semantics are expressed by the methods' return values and by the messages sent to other component instances. From our point of view, adapting component instances can be achieved by modifying their behaviours, for example in order to throw exceptions, send new messages, change return values, etc.

Now, let us see what an interaction is. An interaction between component instances specifies how the component behaviour should change so that the interaction semantics hold. Thus, a notification interaction between an Agenda component and a Display component consists in modifying the behaviour of the Agenda instance in such a way that the Display receives a message whenever a new meeting is added to or removed from the Agenda.

An interaction pattern is an abstraction of the interaction concept. It models a set of interactions that define the same behaviour modifications of the component instances they link together.

Interactions in the Analysis phase

During the analysis phase of the software lifecycle, interactions appear at the modelling level. This can be observed in several UML [1] diagrams. When the presence of interactions has a structural impact, they are expressed as associations in the class diagrams. Some interactions have an additional behavioural impact and therefore appear as events and conditions in the state diagrams. However, in UML it is very difficult to express new interactions that may appear at execution time, e.g. interactions that depend on the presence or absence of certain components in the execution environment.

Interactions and software architectures

Interactions also appear in the context of Architecture Description Languages (ADLs). The fundamental concepts of an ADL are components and connectors, assembled with the help of configurations [12]. A component is specified in an ADL by the services it provides as well as the services it requires. These services are expressed as method signatures, messages or even variables.

"Connectors are architectural building blocks used to model interactions among components and rules that govern those interactions. Unlike components, connectors might not correspond to compilation units in implemented systems." [12]

ADL configuration languages use the architecture description to partially generate the components' implementation or to improve the application deployment. The newest configuration language is the CORBA Component Assembly Descriptor [17], which automates deployment. These descriptors capture key component deployment information, such as assembly instructions and interconnection topology. Some configuration languages have a limited dynamic dimension that makes it possible to specify possible evolutions of the application, as stated by Medvidovic:

"Explicit modelling of architectures is intended to support development and evolution of large and potentially long-running systems. It may be necessary to evolve such systems during execution. Configurations exhibit dynamism by allowing replication, insertion, removal and reconnection of architectural elements or subarchitectures."

The objective of all discussed languages is to describe the communication between the required services and provided services at different stages of the software lifecycle. None of them is explicitly geared to the dynamic management of interactions.

Interactions and Component Models

Standard component platforms such as EJB [9], CCM [14] or .NET [16] compel the developer to define interactions a priori. This is usually done at the level of the component interface or business logic. In CCM, components can interact with external entities, such as the services provided by the ORB, other components or clients, via a set of interfaces called ports, which define the standard mechanisms to modify component configurations.

The OMG IDL (Interface Definition Language) has been extended to express component interconnections. A component can offer multiple interfaces, each one defining a particular point of view from which to interact with the component. The four kinds of component interfaces in CCM are called ports: facets, receptacles, event sources/sinks and attributes, and they are used to configure components [17]. Two interaction modes are provided: facets for synchronous invocations, and event sinks for asynchronous notifications. Moreover, a component can define its required interfaces, which determine how the component interacts with others: receptacles for synchronous invocations, and event sources for asynchronous notifications. Components are configured automatically when they are installed on a component server. The port mechanisms mentioned above provide interfaces to configure the components, i.e. set up the object connections, subscribe to and publish events, etc. It is also possible to use the container programming model to implement interactions. The Container API simplifies the task of developing and configuring CORBA applications by providing an adaptation layer for commonly used services such as Transaction, Notification, Persistence and Security. However, the application developer still needs to express the component configuration statically. This includes how the component reacts to a given event, which event pertains to which sink, etc. As a result, the connectivity and the control induced by interactions are always platform-dependent, and interaction patterns are scattered among the components and the other entities needed for their deployment, e.g. stubs, proxies, etc.

Interactions, Meta Programming and AOP

Aspect Oriented Programming

In an OO application, classes collaborate to achieve the application's overall goal. However, there are parts of a system that cannot be viewed as being the responsibility of only one class; they cross-cut the complete system and affect parts of many classes. Examples might be locking in a distributed application, exception handling, or logging method calls. Of course, the code that handles these parts can be added to each class separately, but that would violate the principle that each class has well-defined responsibilities. This is where Aspect Oriented Programming (AOP) [10] comes into play: AOP defines a new program construct, called an aspect, which is used to capture cross-cutting aspects of a software system in separate program entities. The application classes keep their well-defined responsibilities, and each aspect captures cross-cutting behaviour.

AOP is a method of rewriting code automatically to add or change functionality. The power of AOP lies in its ability to recognize patterns in pre-existing code and change them in multiple places with minimal work from the programmer, without changing the source code of the original program. As an aspect-oriented programmer, you simply specify where (and in what cases) you want changes to occur, and then specify what action you want to take. The aspect compiler then weaves these changes into the compiled code. The original code does not need to know about any functionality the aspect has added, and need only be recompiled without the aspect to regain the original functionality. Aspects can be applied to a number of classes, and can therefore add a capability without forcing a class to extend or implement anything to obtain it.

Aspects are particularly useful in two cases. The first is when you have multiple, unrelated classes to which you want to add common functionality. Finding an object-oriented solution to such a problem may not be worthwhile, given the unrelated nature of the subject classes. Instead of painstakingly attempting to subclass and fulfil interfaces to meet functional requirements, aspects can cross-cut the classes that need to be changed and sew in the action that should be taken. The second case occurs when you want to provide a number of features in a class and allow each feature to be turned on or off at compile time. This way, users can choose which features they desire and are not penalized by code that tests which features are on or off. Changing the configuration of the features simply requires recompiling with different aspects applied. In this case, aspects can be thought of as compile-time interactions. The substantial shortcoming of aspects is that they cannot be dynamically woven or unwoven. Unlike aspects, interactions provide this feature: they can be turned on or off at runtime. Moreover, whereas aspects only apply to a few common concerns such as persistence, logging, authentication, security, error checking, etc., interactions represent an all-purpose means for dynamic adaptation. They cover standard AOP concerns and go beyond them to encompass all kinds of component adaptation.

Meta-Level Programming

The technique of meta-programming goes back to dynamic languages like CLOS and Smalltalk. Now, meta-level programming techniques are key technologies to develop adaptive and adaptable software systems.

  • A meta-program is a program that manipulates other programs or itself.
  • Meta-Object Protocols (MOP) define the interface between the meta level and the base level.

AOP has a lot in common with meta programming [3]. Both capture cross-cutting aspects of a software system in clean, controlled ways. One of the most fundamental properties of meta-level programming is that the programmer has access to the structures that represent a program, i.e. a program written in a specific language is represented at runtime in this very same language. The most popular language that implements meta-level programming concepts is CLOS, the Common Lisp Object System [3]. Its implementations are based on the MOP. The MOP can be seen as a standard interface to the CLOS interpreter. With the help of the MOP it is possible to modify the behaviour of the interpreter in a controlled way. This can be used to dynamically adapt a component's behaviour to its changing environment.

CLOS provides a feature called method combination. For every method it is possible to define a method that is executed immediately before the primary method is executed (called the method's before-method), and a method that is executed after the primary method (the after-method). These methods can also be defined in subclasses.
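A rough analogue of this before/after combination can be written in Java with a dynamic proxy. The sketch below is our own illustration, not CLOS and not part of the interaction service; all names in it (Greeter, BeforeAfterHandler, etc.) are invented for the example.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// The interface whose methods we want to surround with extra behaviour.
interface Greeter {
    String greet(String name);
}

// The "primary method" in CLOS terms.
class SimpleGreeter implements Greeter {
    public String greet(String name) { return "Hello, " + name; }
}

// Runs extra behaviour immediately before and after the primary method,
// without editing the primary method itself.
class BeforeAfterHandler implements InvocationHandler {
    private final Object target;
    BeforeAfterHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("before " + method.getName());   // before-method
        Object result = method.invoke(target, args);        // primary method
        System.out.println("after " + method.getName());    // after-method
        return result;
    }
}

class CombinationDemo {
    static Greeter wrap(Greeter g) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            new BeforeAfterHandler(g));
    }

    public static void main(String[] args) {
        Greeter g = wrap(new SimpleGreeter());
        System.out.println(g.greet("CLOS"));
    }
}
```

Unlike CLOS, where before- and after-methods are a language feature, Java forces this behaviour through an explicit proxy object.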

Central to CLOS' Meta Object Protocol is the concept of meta classes. A meta class is the class of a class. That means that a class defined by the programmer is an instance of another class: the class's metaclass. The metaclass is responsible for implementing the class's protocol, e.g. the method call mechanism, the object creation process, etc. Each class in a system has its own metaclass. Metaclasses can be subclassed to create custom behaviour just like any other class.

In effect, meta-level programming can be used for dynamic component adaptation. However, it is a complex approach that requires expert knowledge. Moreover, we think that interactions should be expressed at the application level rather than at the meta level. Expressing interactions at the meta level in the form of message reception/sending makes the interaction specification unnatural.


The interaction model allows expressing interactions between components and it is in charge of the application's adaptation according to the active interactions within the system. The interaction model should fulfil the following requirements:

  • Avoid inconsistencies that may be entailed by the adaptation
  • Manage the composition of interactions
  • Ensure interaction interoperability across heterogeneous components
  • Enable direct communication between the interacting components: central interaction management should not become a performance bottleneck.

Interaction properties

Interactions are basic atomic elements with the following properties:

  • An interaction pattern defines the behavioural dependencies between the component classes that it connects. The interaction instances of this interaction pattern preserve this coherency locally.
  • An interaction pattern is implementation-independent and an interaction instance can combine heterogeneous components across different platforms. For instance, a Java component may interact with a .NET component.
  • Only the component interface (in particular the provided services) may be used to describe an interaction pattern.
  • Interactions cannot control properties that do not belong to the component's interface. Thus, encapsulation is not broken. The interface of an interaction-bound component is not modified even though its behaviour is modified.
  • Interactions and interaction patterns can be dynamically created and destroyed during the application execution.
  • The interaction management by the component is based on a composition mechanism that ensures commutativity and associativity.

The Implementation of the Interaction Service

In order to support these properties, we provided an implementation of the Interaction Service. This implementation is called "Noah" (available on the website) and it consists of the following parts:

The interaction server: It enables dynamic interaction pattern definition and allows binding or unbinding component instances by instantiating or removing interaction patterns. It also provides methods to traverse the interaction graph. Moreover, it acts as the central repository for interaction patterns. By Noah Server we refer to the Java interaction server. The Noah server is implemented as a Java RMI server but it is exposed to other component platforms, such as .NET, by means of web services.

Interacting components: These are components that have been prepared (through code instrumentation) to interact with other components. For this purpose, we provide several tools for Java-based components such as EJB and RMI. We also provide appropriate tools for .NET components, including local and remote components (published using the HTTP or TCP channels). The behaviour of interacting components can be dynamically modified, so an interacting component is a dynamically adaptable component. The ability to deal with interactions is acquired by these components using code engineering tools such as JavaGenInt and MSILGenInt.

A use case

In order to illustrate the interaction model and its implementation, we take a simple agenda application as an example. This application is made up of several component classes: a Display component class which displays messages, a Security component class which authorizes or rejects method calls on a given component, and an Agenda component class which stores meetings.

At runtime the component instances will be bound and unbound by interactions. We define a team interaction between two Agenda components and a Display component, a notification interaction between a Display component and an Agenda component and a security interaction that associates an Agenda component with an authentication component.

Creating the binding interactions mentioned above leads to changes in the component behaviours. Thus, the Security component checks each method call on the Agenda instance michelAgenda before the latter responds to that call. Only authorised method calls are processed by the Agenda component instance. Thereby we see a notable behavioural change on this component instance: the base behaviour was to execute all method calls, whereas now it executes only authorized calls.

Notice that the security and notification interactions pertain to non-functional component properties, whereas the team interaction affects the component's business logic. The security interaction could also be implemented as an aspect in AOP, while the team interaction cannot be modelled as an aspect because it impacts the functional part of the Agenda component.

Figure 1 shows a possible graph of components and interactions during the execution of the application.

Figure 1: Interaction between components

Interaction pattern definition

In the agenda example, the programmer can define an interaction pattern that associates the agenda component instance with a display component instance at runtime, so that an appropriate message is shown whenever a new meeting is added to the agenda. The programmer defines interaction patterns and ships them together with the application. The interaction patterns are interdependence models that the final user can apply to the application components at runtime. Interaction patterns are defined at the level of component classes, whereas the interactions, or interaction instances, affect instances of these component classes. The final user can in turn create additional interaction patterns, e.g. a persistence pattern binding an Agenda to a database component, which results in persisting the agenda meetings to the database.

Interaction patterns are specified in the Interaction Specification Language (ISL). This language is described in the next section. An interaction pattern defines at least one interaction rule. Interaction rules express the control that should be executed on the connected components. An interaction rule consists of two parts: the left side is the notifying message and the right side is the action. The semantics of an interaction rule is method rewriting: instead of executing the default method (default behaviour), the interaction runtime executes the actions specified in the rule's action part. This applies to all component methods that match the rule's notifying message, where "match" means the same component class and the same method signature.

The following listing shows the interaction patterns mentioned above. As stated, only the team interaction pattern pertains to the functional part of the Agenda component. The other interactions are non-functional properties or services.
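The original listing is not reproduced here. The following ISL-style sketch is reconstructed from the prose only; the pattern headers, parameter names and exact syntax are our assumptions, not the article's actual code.

```
// Reconstructed sketch; exact ISL syntax may differ.

interaction notification(Component obj, Display display) {
  // every message received by obj is executed by obj and,
  // concurrently, sent to the Display component
  obj._call -> obj._call // display.show(_call)
}

interaction team(Agenda group, Agenda member, Display display) {
  // addMeeting on the group is executed by the group itself and
  // the meeting is also added to the member agenda
  group.addMeeting -> group._call // member._call
  // the pattern defines a second rule, not detailed in the text
}
```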

The interaction pattern notification can bind any component to a Display component. It contains only one interaction rule expressing that every message received by the component obj should be executed by obj and concurrently sent to the Display component.

The ISL keyword call represents the notifying message call (obj._call). It also represents the reified notifying message when it is used alone (_call) as a method parameter. The reified notifying message is an object that encapsulates the notifying method call and the call parameters.

The interaction pattern team can bind three components and defines two interaction rules. The first rule states that the notifying message addMeeting sent to the Agenda instance group results in the group executing the message itself; moreover, the meeting is added to the Agenda instance member. This interaction pattern defines a collaboration relationship among the Agenda instances.

Once defined, the interaction pattern has to be registered on the Interaction Server. The latter provides a registerPattern(String islPattern) method that takes an interaction pattern as parameter. The Interaction Server acts as the central pattern repository. Interaction patterns can be retrieved or modified with the getPatternCode(String patternName) method. The Noah Editor is a graphical user interface which facilitates writing interaction patterns.
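The registration step can be sketched with a toy, in-memory stand-in for the pattern repository. Only the registerPattern and getPatternCode signatures come from the text; the InMemoryInteractionServer class and its naive name extraction are our inventions.

```java
import java.util.HashMap;
import java.util.Map;

// Toy, in-memory stand-in for the Noah pattern repository (hypothetical:
// only the two method names are taken from the article).
class InMemoryInteractionServer {
    private final Map<String, String> patterns = new HashMap<>();

    // Registers an ISL pattern under the name found in its header.
    // Naive extraction: assumes the source starts with "interaction <name>(".
    public void registerPattern(String islPattern) {
        String name = islPattern.trim().split("[\\s(]+")[1];
        patterns.put(name, islPattern);
    }

    // Retrieves the ISL source of a previously registered pattern,
    // or null when the name is unknown.
    public String getPatternCode(String patternName) {
        return patterns.get(patternName);
    }
}

class RegistrationDemo {
    public static void main(String[] args) {
        InMemoryInteractionServer server = new InMemoryInteractionServer();
        String notification =
            "interaction notification(Component obj, Display display) { /* rules */ }";
        server.registerPattern(notification);
        System.out.println(server.getPatternCode("notification"));
    }
}
```

In the real system the repository is the remote Noah server, reached over RMI, rather than a local map.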

Creating and removing interactions

At runtime the programmer can bind or unbind component instances together using one of the interaction patterns registered on the interaction server. The Server provides the method instantiatePattern(String patternName, NoahProxy targets[]) throws Exception to create new interactions.

The interaction server creates an RMI interaction object that represents the interaction, stores it, and then requests the involved component instances to take into account the interaction rules expressed by the instantiated interaction. If at least one interaction rule with the same notifying message has already been applied to a component instance, the component has to merge the new rule with the existing ones. The merging process generates one interaction rule which is semantically equivalent to the input rules. From the next notifying call on, the component's behaviour is altered. The merging process is described later.

If we successively instantiate the team interaction on the Agenda group and the Agenda instances David, Michel and Anis, the merging process generates the following action for the notifying message "group.addMeeting". The resulting rule means that each time a new meeting is added to the group agenda, the meeting is also added to the agendas of the team members. The ";" denotes the ISL sequential operator, while the symbol "//" denotes the concurrency operator. Accordingly, the actions of adding the new meeting (var 0) to the member agendas are performed in parallel.
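The merged action described above, which is not reproduced in this version of the article, might look roughly as follows; this is a reconstruction from the prose, and the exact ISL notation is an assumption.

```
// Reconstructed sketch of the merged action for "group.addMeeting".
group.addMeeting(var 0) ->
    group._call ;
    ( David.addMeeting(var 0) // Michel.addMeeting(var 0) // Anis.addMeeting(var 0) )
```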

Since the interacting components are not necessarily Java components, special proxies are needed. The proxies are Java objects that provide an identical interface to the interaction server. In fact, even in the Java world the components may be quite different, i.e. Enterprise Java Beans, local Java objects or remote Java objects. We can also have interactions that involve .NET components as well as Java components. For this reason, the instantiatePattern(String pattern, NoahProxy[] objects) method takes a NoahProxy array as its second parameter. This proxy abstracts away the technical component properties such as implementation language, platform, etc.

As an alternative to this method, new interactions can be created with the help of the Noah Editor graphical interface. The user first selects the registered interaction pattern he wants to instantiate (e.g. notification). A window pops up showing the types of components that can be bound by this interaction pattern and all known running instances of these types. After that, the user chooses the instances that should interact, and the tool requests the Interaction Server to create the corresponding interaction rules.

The next screenshot (figure 2) shows the Noah Editor. In the upper left corner, all registered interaction patterns are listed. The user can input a new interaction pattern. When the user selects one of the listed patterns, the tool shows the interacting components that are bound by this pattern in the "Registered object" window. The concrete interaction objects are RMI objects and they are shown in the "object's interactions" window. The tool also displays the business methods of an interacting component and the interaction rules that concern each of them. Business methods are those methods that are relevant to interactions, i.e. they may appear on the left side of an interaction rule.

Figure 2: The Noah editor

When the user selects the business method addMeeting(string meeting) of the interacting component David (an instance of Agenda), the tool displays the ISL interaction rules that pertain to the notifying message addMeeting on the component David.

To delete an interaction, the Interaction Server provides the method removeInteraction (String identifier) throws Exception. All connected component instances (present on the left side of the interaction rule) will be requested to adapt their behaviour after the interaction (identified by the String identifier) has been destroyed. The destruction of an interaction can be performed with the Noah Editor, too.

Interacting components

Components that appear on the left side (notifying message) of an interaction rule must necessarily be instances of interacting components. However, on the right side (action) of an interaction rule, every component (even non-interacting) may appear. Interacting components must fulfil these requirements:

  • they are able to dynamically merge and unmerge interaction rules.
  • they can switch the main execution thread to a local control after the reception of a notifying message.
  • they can send messages directly to the other connected interacting components without going through the interaction server.

The execution of messages (method calls) within interacting components is quite different from the ordinary method execution. When an interacting component receives a message that turns out to be a notifying message, the interaction rule that is associated with that message is evaluated locally. This evaluation can be thought of as interpreting the rule's action. During this evaluation, calls among the involved components are direct and do not pass through the interaction server.

Several tools are shipped with the interaction server. These tools perform class modification using code instrumentation and code engineering techniques. The classes are modified so that they can manage interaction rules (adding, removing and merging).

The tool JavaGenInt handles this task for local and RMI Java components. It uses the Byte Code Engineering Library (BCEL) [5].

For Enterprise Java Beans, the proxies take charge of interactions in a similar way to standard services like synchronisation, consistency and persistence. In JOnAs [4], the proxies are generated by the GenIC code generation tool. We modified this tool so that the generated proxies can manage interactions. To make an EJB interacting, the developer sets the attribute "value" of the element "jonas-interaction" to "true", as shown in the next code segment. This setting is specified in the deployment descriptor of the EJB application. The XML DTD of the EJB deployment descriptor has been extended appropriately.
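The code segment itself is missing from this version of the article. A plausible fragment of the extended JOnAs deployment descriptor might look as follows; only the "jonas-interaction" element and its "value" attribute come from the text, while the surrounding structure and bean name are assumed.

```xml
<!-- Hypothetical fragment: only <jonas-interaction value="true"/> is
     attested by the article; the rest of the structure is assumed. -->
<jonas-ejb-jar>
  <jonas-session>
    <ejb-name>Agenda</ejb-name>
    <jonas-interaction value="true"/>
  </jonas-session>
</jonas-ejb-jar>
```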

The interaction model also supports .NET components. The MSILGenInt tool takes a .NET assembly (dll or exe file) as input and makes the classes of the assembly 'interacting'. MSILGenInt performs code engineering at the intermediate language level. MSIL stands for Microsoft Intermediate Language, a language similar to Java bytecode.

For all interacting components, the overhead of the interaction mechanism is the cost of a single test instruction when no interaction rule is applied to the notifying message. Otherwise, the message is executed using the dynamic invocation facilities of the Reflection API, in Java or .NET respectively.

Interoperability and Interactions

One of the problems encountered relates to the communication between components from different platforms. The interacting components may be local objects, Enterprise Java Beans, or remote objects in Java or .NET. We need to handle these components in the same manner. This is required by the interaction server when it communicates with the components. Furthermore, direct inter-component communication raises the same requirement.

For this reason, we should find a way to handle the components uniformly and minimize differences between the various implementations. In fact, ISL parsing, ISL tree management and rule merging are common elements among the various implementations.

Our approach is to encapsulate a reference to the interacting component in a NoahProxy object that manages message sending and provides an identical interface to all interacting components. This is fully transparent to the user.
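The proxy idea can be sketched as follows. The interface and class names here (UniformProxy, LocalObjectProxy) are hypothetical, not Noah's actual API, and only the local-object case is shown; the RMI and .NET cases would implement the same interface with different dispatch logic.

```java
import java.lang.reflect.Method;

// Hypothetical sketch: one interface hides whether the wrapped component is
// a local object, an RMI stub, or a bridge to a .NET component.
interface UniformProxy {
    Object send(String method, Object[] args) throws Exception;
}

// Local-object case: dispatch the message through reflection.
class LocalObjectProxy implements UniformProxy {
    private final Object target;
    LocalObjectProxy(Object target) { this.target = target; }

    public Object send(String method, Object[] args) throws Exception {
        int arity = (args == null) ? 0 : args.length;
        for (Method m : target.getClass().getMethods()) {
            if (m.getName().equals(method) && m.getParameterCount() == arity) {
                return m.invoke(target, args);
            }
        }
        throw new NoSuchMethodException(method);
    }
}

// A toy component used to exercise the proxy.
class Agenda {
    public String addMeeting(String meeting) { return "added " + meeting; }
}

class ProxyDemo {
    public static void main(String[] args) throws Exception {
        UniformProxy p = new LocalObjectProxy(new Agenda());
        System.out.println(p.send("addMeeting", new Object[] { "kickoff" }));
    }
}
```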


The ISL language is used to specify interaction patterns independently of the application language. It includes several operators, such as the conditional operator (if ... then ... else ... endif), the sequential operator (;), the concurrency operator (//), waiting operators and an exception handling operator. The keyword 'this' within an interaction pattern refers to the interaction object itself. ISL recursively defines the component behaviour with the operators described below (each behaviour describes a behavioural class). The term behaviour denotes the interaction rule's action (right side). We next discuss briefly the semantics of the ISL operators and constructs. A detailed description can be found in [2].

  • The method invocation operator "." denotes a method call on the receiving component. Using the keyword _call in place of a method call refers to the notifying method call.
  • The assignment operator ":=" assigns the return value of a message sending behaviour (method call) or of an assignment to a variable.
  • The sequential operator ";" states that two behaviours should be executed one after the other.
  • The concurrency operator "//" states that two behaviours should be executed in parallel.
  • The waiting operator ("_X" where X is a label) states that the execution of a message, variable assignment or another waiting behaviour is blocked, until the end of the execution of a behaviour labelled by "[X]".
  • The conditional operator (if then else endif) states a conditional execution of a behaviour depending on the boolean result of the execution of another behaviour.
  • The exception handling operator (try . . . catch) permits throwing exceptions. It can only be used to express that the execution of a notifying message has been rejected. The exception thrown is then returned to the user (if it is not caught in the interaction rule).
  • The delegation operator states that an action that does not contain the triggering message of the current rule should be considered as the triggering message. There is no keyword to denote this operator. It is implicitly added during the phase of semantic analysis.
  • The wildcard operator * matches all messages on the receiving component. It can only be used after the method invocation operator. This operator is used in the security pattern, so that all method calls on the Agenda are notifying.
  • The ISL keyword _call represents the notifying message call. It also represents the reified notifying message when it is used alone (_call) as a method parameter.
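As an illustration, the wildcard and _call constructs can be mimicked outside ISL with a dynamic proxy. The following Python sketch is our own illustration, not code generated by Noah: the guard is a stand-in for whatever condition a security pattern would test. It intercepts every method call on a component, as obj.* does in the security pattern, and forwards the notifying call only when the guard accepts it:

```python
class Agenda:
    """Toy business component (the Agenda of the running example)."""
    def __init__(self):
        self.meetings = []

    def addMeeting(self, title):
        self.meetings.append(title)
        return title


class SecurityInteraction:
    """Proxy mimicking an ISL rule of the form
    'obj.* -> if guard then _call endif':
    every method call on the wrapped component is notifying."""
    def __init__(self, component, guard):
        self._component = component
        self._guard = guard

    def __getattr__(self, name):
        # Wildcard '*': intercept any method looked up on the proxy.
        target = getattr(self._component, name)

        def wrapper(*args, **kwargs):
            if self._guard():                    # conditional operator
                return target(*args, **kwargs)   # _call: run the notifying call
            raise PermissionError(f"call to {name} rejected")
        return wrapper


agenda = SecurityInteraction(Agenda(), guard=lambda: True)
agenda.addMeeting("project review")   # accepted: the guard returns True
```

With a guard that returns False, every intercepted call raises an exception instead of reaching the component, which corresponds to the rejection of a notifying message described for the exception-handling operator.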


Interaction pattern definition follows the principle of separation of concerns. Interactions are created dynamically, on behalf of several users who have different points of view of the application (local view). This leads to a separation of interactions, as every user expresses controls and behaviour modifications regardless of the others. Consequently, the interaction model must enforce the overall coherency (global view) of the components.

Thus, when more than one interaction is simultaneously applied to the same notifying message of a component, merging the interaction rules becomes necessary. Rule merging is managed dynamically by interacting components each time a rule is added to or removed from a component instance.

The merging mechanism is specified through a finite set of merging rules and equivalence axioms based on the ISL operators. These rules and axioms are described in more detail in [2]. The rule merging fulfils the following properties:

  • The coherency of the overall interaction rules: in particular, the fusion of rules that may entail non-deterministic behaviour is rejected. Moreover, merging does not provoke any non-explicit waiting, especially when the concurrency and sequence operators are merged together.
  • Rule merging is commutative: the order in which the rules have been added to the interacting component does not affect the behaviour of the system; the resulting behaviours are equivalent for all orderings. For instance, if we first instantiate the notification pattern and then the security pattern, we intuitively expect the same behaviour as if we had instantiated both patterns the other way around.
  • Rule merging is also associative. If we merge the team interaction with the notification interaction, then take the resulting rule and merge it with the persistence interaction, we will get the same result as if we firstly merge the persistence and notification interactions together and then add the team interaction.
  • Two rules that throw exceptions cannot be merged, because if an exception is thrown at runtime we would not know which rule has thrown it. We say that raising exceptions is absorbing for the merging mechanism.
  • Merging the sequential operator with the concurrency operator should not induce any chronological order that does not emanate from the input rules. When the merging may introduce an order between the actions, the waiting operator is used. The reasoning relies on the following equivalence: m ; p ≡ m[X] // _X p.
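This equivalence between sequencing and guarded concurrency can be illustrated with threads: the label [X] becomes an event signalled when m finishes, and _X p becomes a thread that waits on that event before running p. The following is a minimal Python sketch of that idea (our illustration, not part of ISL's implementation):

```python
import threading


def run_parallel_with_wait(m, p):
    """Executes m[X] // _X p: both behaviours start concurrently,
    but p is blocked until the behaviour labelled [X] (here m) ends,
    which is observably equivalent to the sequence m ; p."""
    x_done = threading.Event()        # the label [X]
    trace = []

    def labelled_m():
        m(trace)
        x_done.set()                  # end of [X] releases the waiters

    def waiting_p():
        x_done.wait()                 # _X : block until [X] has finished
        p(trace)

    threads = [threading.Thread(target=labelled_m),
               threading.Thread(target=waiting_p)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return trace


trace = run_parallel_with_wait(lambda t: t.append("m"),
                               lambda t: t.append("p"))
# trace == ["m", "p"], the same observable order as m ; p
```

Because the ordering comes from an explicit wait rather than from the merge itself, the merged rule never imposes a chronological order that was not already present in one of the input rules.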

When rule merging is possible, it generates one rule that will be executed instead of the merged rules. The resulting rule has the same semantics as the merged rules. The next figure explains how an interacting component manages interaction rules and how it meets the rule merging requirements.

An interacting component has an array of rules for each business method. The first position (zero) of each array holds the result of merging all rules. When an interaction rule is added to a component, it is stored in the next free slot of the array and then merged with the rule in the first slot; the result of the merging is placed back in the first position. In the next figure we see the rule array of the Agenda instance Anis for the method addMeeting. Merging the two interactions notification and security (both have the same notifying message obj.*) generates one rule, which is stored in the first slot.
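The rule-array bookkeeping described above can be sketched in a few lines of Python. This is an illustrative model, not Noah's implementation: the real mechanism merges ISL rule trees according to the merging rules and axioms of [2], whereas here merging is reduced to an order-insensitive textual composition purely to exhibit the commutativity and associativity properties:

```python
class InteractingComponent:
    """Simplified model of the per-method rule array: slot 0 always
    holds the merge of all rules; new rules go in the next free slot."""

    def __init__(self):
        self.rules = {}                 # method name -> array of rules

    def add_rule(self, method, rule):
        array = self.rules.setdefault(method, [None])
        array.append(rule)              # store in the next free slot
        array[0] = self._merge(array[1:])   # re-merge into slot 0

    def remove_rule(self, method, rule):
        array = self.rules[method]
        array.remove(rule)
        array[0] = self._merge(array[1:]) if len(array) > 1 else None

    @staticmethod
    def _merge(rules):
        # Stand-in for the ISL merging rules: an order-insensitive
        # composition, trivially commutative and associative like
        # the real mechanism.
        return " // ".join(sorted(rules)) if rules else None


anis = InteractingComponent()
anis.add_rule("addMeeting", "notification")
anis.add_rule("addMeeting", "security")
# anis.rules["addMeeting"][0] == "notification // security"
```

Adding the two rules in the opposite order yields the same merged rule in slot 0, which is exactly the commutativity property the merging mechanism guarantees.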

Figure 3: Merging rule


The interaction model allows the dynamic adaptation of components with the help of interaction patterns and interaction rules. The user expresses interaction patterns at the application level using the ISL language. The merging mechanism is essential to keep all interactions globally coherent; it ensures commutativity and associativity.

In the current implementation, interactions can be applied to Java-based components (local, RMI, or EJB) and to .NET-based components. The Noah interaction server is available for download. The interaction service has been used to dynamically manage databases [7], to manipulate frameworks [15] and to integrate technical services [13]. Furthermore, a study is being conducted on managing the adaptation of nomadic applications with the help of interactions.

The advantage of the interaction model over AOP lies in its dynamic character and its support for heterogeneous components. In fact, AOP languages such as AspectJ [11] only provide static aspect weaving on Java components. In meta-programming, the programmer expresses aspects in terms of meta-behaviours, which takes the programmer away from the application and makes these languages less intuitive. In addition, in these approaches the composition of aspects is based on the explicit order of aspect weaving, which is difficult for end users to manage. The meta-programming approaches are, however, geared towards dynamic service integration.

With regard to services, the CORBA notification service [8] is somewhat close to the interaction service. Nevertheless, with the notification service the programmer has to implement interactions by means of notifications and event management. In addition, the absence of interactions as well-structured entities makes the composition of interactions quite difficult for the system to manage.

Several works around the interaction model are currently in progress. We are considering extending the syntax of the ISL language and defining additional operators; for example, the operator return allows giving up control before an interaction rule is fully executed. We are also considering placing interactions at other join points besides message reception, even though the dynamic character of the interactions and the distributed and heterogeneous character of the components are hard constraints. Some applications are dedicated to specific domains: for expert systems, for example, we provide a library of interaction patterns that can easily be used by the final users of the expert system.

We also found that the expression of generic interaction patterns is very useful. In fact, the same type of interaction may apply to different types of interacting components. The investigation of generic interaction patterns is in progress, and we are evaluating various alternatives for integrating generic interactions into the interaction server Noah.

At present, the interaction model does not specify any user rights, because it was originally conceived to allow collaborative programming. We are now working on a security model which specifies the different programmer roles and the respective user rights, such as creating, destroying, putting and removing interactions. A further thrust of research within the RAINBOW team focuses on Human-Computer Interaction (HCI). In this field, we examine how the interaction model can be used for HCI composition. With the interaction model, we can consider the HCI as a technical service of a business component (which contains only the application logic), just like the security or persistence services. In this vein, we can manage the dialogue between the UI and business components by means of interaction rules. Interactions bring an additional abstraction layer and provide tools that permit controlling this layer. We aim at offering more flexibility and more control over the application's adaptation by analyzing the interaction graph. We are also developing supplemental administrative tools such as the interaction network viewer.


[1] S. S. Alhir. UML in a Nutshell. O'Reilly, 1998.

[2] L. Berger. "Mise en oeuvre des interactions en environnements distribués, compilés et fortement typés: le modèle MICADO". PhD thesis, Université de Nice-Sophia Antipolis, octobre 2001.

[3] D. G. Bobrow, L. G. DeMichiel, R. P. Gabriel, S. E. Keene, G. Kiczales, and D. A. Moon. "Common Lisp Object System specification X3J13". In SIGPLAN Notices (Special Issue), 23, 1988.

[4] E. Cecchet and J. Marguerite. "JOnAS v2.4 tutorial". Technical report, Nice University and INRIA, 2002.

[5] M. Dahm. "Byte code engineering with the BCEL API", 2001.

[6] M. Shaw, R. DeLine, D. V. Klein, T. L. Ross, D. M. Young, and G. Zelesnik. "Abstractions for software architecture and tools to support them". IEEE Trans. Software Engineering, 21(4):314–335, April 1995.

[7] A.-M. Dery, M. Blay-Fornarino, and S. Moisan. "Distributed access knowledge-based system: Reified interaction service for trace and control". 3rd International Symposium on Distributed Objects and Applications (DOA 2001), September 2001.

[8] Object Management Group. "Notification service", omg document formal/00-06-20. Technical report, June 2000.

[9] Sun Microsystems Inc. "Enterprise JavaBeans specification, version 1.1". January 2000.

[10] Gregor Kiczales, John Lamping, Anurag Menhdhekar, Chris Maeda, Cristina Lopes, Jean-Marc Loingtier, and John Irwin. "Aspect-oriented programming". In Mehmet Aksit and Satoshi Matsuoka, editors, Proceedings European Conference on Object-Oriented Programming, volume 1241, pages 220–242. Springer-Verlag, Berlin, Heidelberg, and New York, 1997.

[11] G. Kiczales and J. Lamping. "AspectJ homepage". Technical report, 2001.

[12] Nenad Medvidovic and Richard N. Taylor. "A framework for classifying and comparing architecture description languages". In M. Jazayeri and H. Schauer, editors, Proceedings of the Sixth European Software Engineering Conference (ESEC/FSE 97), pages 60–76. Springer-Verlag, 1997.

[13] Olivier Nano, Mireille Blay-Fornarino, Anne-Marie Dery, and Michel Riveill. "An abstract model for integrating and composing services in component platforms". Seventh International Workshop on Component-Oriented Programming (in conjunction with ECOOP 2002), Malaga, Spain, June 2002.

[14] R. Marvie and M.-C. Pellegrini. "Modèles de composants, un état de l'art". Numéro spécial de L'Objet, 8(3), 2002.

[15] P. Rapicault. "Modèles et techniques pour spécifier, développer et utiliser un framework : une approche par méta-modélisation". PhD thesis, Université de Nice-Sophia Antipolis, May 2002.

[16] Jeffrey Richter. "Applied Microsoft .Net Framework Programming". Microsoft Press, 2002.

[17] Nanbor Wang, Douglas C. Schmidt, and Carlos O'Ryan. "Overview of the CORBA component model". Technical report, Washington University in St. Louis, 2000.



About the authors


Mireille Blay-Fornarino is Assistant Professor in the CNRS/I3S laboratory, University of Nice.


Anis Charfi is a PhD student at the Darmstadt University of Technology. During his master's thesis within the Rainbow team he implemented the interaction model in .NET.

David Emsellem is a research engineer in the CNRS/I3S laboratory, University of Nice.

Anne-Marie Pinna-Dery is Assistant Professor in the CNRS/I3S laboratory, University of Nice.

Michel Riveill is Professor of Computer Science at the Université de Nice - Sophia Antipolis, where he heads the Rainbow project at the Laboratoire I3S. Previously, he was successively Professor of Computer Science at the Université de Savoie and, from 1993, at the Institut National Polytechnique de Grenoble.

Cite this article as follows: Mireille Blay-Fornarino, Anis Charfi, David Emsellem, Anne-Marie Pinna-Dery, Michel Riveill: "Software Interactions", in Journal of Object Technology, vol. 3, no. 10, 2004, pp. 161-180.
