Internal Project: Lydia
As the impressive, exponential improvement in the hardware performance/cost ratio is expected to continue for at least another decade, trillions of embedded systems will pervade every aspect of our society. A sharp contrast to the potential blessings of this development is the looming software complexity crisis. At present, complex systems already require many millions of lines of code. As systems become more and more integrated, it will become increasingly difficult to anticipate responsive behavior at design time. The dynamic, run-time response capabilities required to cover these issues will only add to the crisis.
It is becoming more and more apparent that, under the current traditional IT paradigms, this huge complexity makes it impossible to deliver software that provides system capability, correctness, and availability at any reasonable cost. Moreover, there will not be enough skilled IT professionals to design, test, debug, install, configure, operate, diagnose, and maintain such complex systems, assuming these tasks remain within human ability to begin with. A natural solution to mastering system complexity is to introduce autonomy, just as found in complex organisms such as humans. This approach, referred to as autonomic computing, aims at providing systems with the intelligent capability to install and run themselves, adjust to varying conditions, anticipate events, monitor their health, recover from incidents, generate contingency plans, etc., without costly and error-prone human intervention. The software complexity involved in realizing autonomy is clearly no less of an inhibiting factor than the complexity problems mentioned earlier. This applies in particular to autonomic embedded systems, given additional constraints such as real-time performance and energy efficiency, and the fact that such systems involve disciplines such as physics and mechanical, electrical, and computer engineering, apart from software engineering itself.
A promising way of overcoming this obstacle on the road towards autonomous embedded systems is to apply a model-based approach, in which relevant knowledge about the system is concentrated in a compositional model. In this generic approach, intelligent modes of operation at run-time, such as health prognosis, fault diagnosis, maintenance, and recovery planning, as well as testability analysis at design time, are realized through an infrastructure of application-independent engines (using algorithms from AI) that reason in terms of a model of the system. This approach allows software developers to focus on model development, which greatly amortizes development cost, as the self-management code is synthesized automatically. Separating modeling concerns from IT-typical concerns such as model compilation, hardware/software co-design, system interfacing, and algorithm development also allows application domain experts without specific IT skills to participate efficiently in the development process. The compositionality of the model further streamlines development by enabling the use of reusable component libraries, possibly produced by third-party component vendors.
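To make the idea of an application-independent engine reasoning over a system model concrete, the following minimal Python sketch illustrates consistency-based fault diagnosis, one of the intelligent modes of operation mentioned above. All names are hypothetical and the model is deliberately tiny: two inverters connected in series, with a weak fault model in which a faulty component's output is unconstrained. The engine itself knows nothing about inverters; it only enumerates health assignments and checks which ones are consistent with the observation.

```python
from itertools import product

def simulate(healthy_a, healthy_b, inp, fault_vals):
    """Model of two inverters in series. Under the weak fault model,
    a faulty inverter's output is unconstrained, so a candidate
    fault value is substituted for it."""
    mid = (not inp) if healthy_a else fault_vals[0]
    out = (not mid) if healthy_b else fault_vals[1]
    return out

def consistent(health, inp, obs_out):
    # A health assignment is consistent if SOME choice of fault
    # behaviour reproduces the observed output.
    return any(simulate(health[0], health[1], inp, fv) == obs_out
               for fv in product([False, True], repeat=2))

def diagnoses(inp, obs_out):
    # Generic engine: enumerate all health assignments (healthy_a,
    # healthy_b) consistent with the observation.
    return [h for h in product([True, False], repeat=2)
            if consistent(h, inp, obs_out)]

def minimal_diagnoses(inp, obs_out):
    # Prefer diagnoses that blame as few components as possible.
    ds = diagnoses(inp, obs_out)
    best = max(sum(h) for h in ds)  # most components still healthy
    return [h for h in ds if sum(h) == best]
```

For input False, two healthy inverters would produce False; if True is observed instead, the engine rules out the all-healthy assignment and reports the two single-fault candidates (inverter A faulty, or inverter B faulty) as minimal diagnoses. A real Lydia model would be written declaratively and compiled, but the separation shown here, a component model on one side and a generic reasoning engine on the other, is the essence of the approach.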
The Lydia project aims at the development of a systems modeling language and compiler framework that enables the automatic generation of model-based embedded software, providing complex, expensive, and possibly life-critical systems with the intelligence, such as prognostic health management and autonomic reconfiguration, needed for optimally dependable operation.
The Lydia project is an ongoing research effort within the Embedded Software Laboratory.