Ambient intelligence opens a world of unprecedented experiences: the
interaction of people with electronic devices will change as context awareness,
natural interfaces, and ubiquitous availability of information are realized.
Distributed applications and their processing on embedded stationary and mobile
platforms play a major role in the realization of ambient intelligent
environments. Notions such as media at your fingertips, enhanced-media
experiences, and ambient atmospheres denote novel and inspiring concepts that
aim to realize specific user needs and benefits such as personal expression,
social presence, and well-being. These seem quite obvious from a human
perspective, but are quite hard to realize because of their intrinsic
complexity and ambiguity. Obviously, the intelligence experienced in the
interaction with ambient intelligent environments will be determined to a large
extent by the software executed on the distributed platforms embedded in the
environment, and consequently, by the algorithms that are implemented by that
software.
In this presentation we describe the world of ambient intelligence and define its key characteristics. Starting from this vision, we identify a number of requirements for the design of ambient intelligent systems and the research challenges in systems design that result from these requirements.
Video signal processing is shifting from dedicated hardware to software implementation because of the flexibility software offers. Digital signal processors (DSPs) for media processing are, however, limited in their resources, so as to enable cost-efficient implementations for consumer devices. One way to achieve such cost-efficient implementations is to use resource-quality scalable video algorithms (SVAs). This implies that dynamic resource adaptations result in dynamic quality changes, which may affect the overall image quality. Starting from the properties of SVAs, we present typical quality issues, including proposals for high-quality image processing.
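The core idea of a resource-quality scalable algorithm can be sketched in a few lines: the algorithm exposes discrete quality levels with different resource costs, and a controller selects the highest level that fits the currently available budget. The levels and cost figures below are purely illustrative, not taken from the talk:

```python
# Illustrative sketch of resource-quality scalable processing (SVA).
# Each quality level has an estimated per-frame processing cost; the
# controller picks the highest level that fits the current budget.

QUALITY_LEVELS = [  # (level, estimated cost in ms per frame) -- invented numbers
    (0, 4.0),   # lowest quality, cheapest
    (1, 7.5),
    (2, 12.0),
    (3, 18.0),  # full quality
]

def select_quality_level(budget_ms: float) -> int:
    """Return the highest quality level whose cost fits the budget.

    Falls back to level 0 when even the cheapest level exceeds the
    budget (the frame is then processed late or skipped).
    """
    best = 0
    for level, cost in QUALITY_LEVELS:
        if cost <= budget_ms:
            best = level
    return best

print(select_quality_level(10.0))  # -> 1
print(select_quality_level(20.0))  # -> 3
```

When the DSP load rises, the budget per frame shrinks and the controller drops to a cheaper level; the resulting quality change is exactly the dynamic adaptation the abstract refers to.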
Wireless media such as IEEE 802.11a, 802.11b, and 802.11g are sensitive to perturbations. Packets are easily lost, and the bandwidth of the medium changes rapidly. The effect on video streamed over a wireless medium is disastrous. In this talk the effects on the video quality are shown to depend on the operational conditions: the transmission protocol and the video source. It is shown how deploying SNR-scalable and temporally scalable video reduces the effects of the transmission perturbations. The key is a controlled adaptation of the stream at the sender. The result is that the highest possible video quality is transmitted over the wireless link and that, under packet loss, the user never perceives artifacts, but only a reduced-quality video with possibly a visible gap between two successive frames.
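Controlled sender-side adaptation of a layered stream can be sketched as a simple greedy selection: keep the base layer unconditionally and add enhancement layers, in order of importance, while they fit the measured bandwidth. The layer names and bitrates below are invented for illustration:

```python
# Illustrative sketch of sender-side adaptation for a layered
# (SNR/temporally scalable) video stream. Layer names and bitrates
# are invented, not from the talk.

LAYERS = [
    ("base",         1000),  # kbit/s; always sent
    ("enh-temporal",  500),
    ("enh-snr-1",     750),
    ("enh-snr-2",    1000),
]

def layers_to_send(available_kbps: float) -> list:
    """Greedily keep layers, in importance order, that fit the bandwidth."""
    chosen, used = [], 0.0
    for name, rate in LAYERS:
        # The base layer is always kept, even if bandwidth is insufficient.
        if used + rate <= available_kbps or not chosen:
            chosen.append(name)
            used += rate
    return chosen

print(layers_to_send(2400))  # -> ['base', 'enh-temporal', 'enh-snr-1']
print(layers_to_send(800))   # -> ['base']
```

Dropping an SNR layer lowers the picture quality gracefully; dropping the temporal layer halves the frame rate, which corresponds to the visible gap between successive frames mentioned above.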
Increasingly, video processing in digital TVs and set-top boxes is performed in software on programmable components, such as the Philips TriMedia processor. Generally, video processing tasks show strong load fluctuations, which are due to the varying size and complexity of the video data they process. There is often a large gap between the worst-case and the average-case resource needs of a video processing task. We present an approach that allows close-to-average-case resource allocation to a single video processing task, based on asynchronous, scalable processing, and QoS adaptation. A scalable video processing task can reduce its processing needs by decreasing the quality level of processing, at the level of individual video frames. The QoS adaptation balances different QoS parameters that can be tuned, based on user-perception experiments: the quality level at which frames are processed, deadline misses, and changes in the quality level between successive frames. We model the balancing problem as a stochastic decision problem, and propose two intelligent control strategies, based on a Markov decision process and reinforcement learning, respectively. We validate our approach by means of simulation experiments, and conclude that both strategies perform close to optimum.
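The reinforcement-learning control strategy can be illustrated with a minimal Q-learning sketch, assuming a state made of the previous quality level plus a coarse budget indicator, and a reward that trades off the three QoS parameters named above (quality level, deadline misses, level changes). All state encodings and reward weights here are assumptions for illustration, not the parameters used in the experiments:

```python
import random

# Minimal Q-learning sketch of the QoS balancing problem. State:
# (coarse budget indicator, previous quality level); action: quality
# level for the next frame. Reward weights are invented.

LEVELS = range(4)
BUDGETS = range(3)  # 0 = behind schedule, 1 = on time, 2 = ahead

Q = {(b, prev, a): 0.0 for b in BUDGETS for prev in LEVELS for a in LEVELS}
alpha, gamma, eps = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(level: int, prev: int, missed: bool) -> float:
    r = float(level)              # processing at a higher level is better
    r -= 10.0 if missed else 0.0  # deadline misses are heavily penalised
    r -= abs(level - prev)        # quality fluctuations annoy the viewer
    return r

def choose(budget: int, prev: int) -> int:
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice(list(LEVELS))
    return max(LEVELS, key=lambda a: Q[(budget, prev, a)])

def update(state, action, r, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(*next_state, a)] for a in LEVELS)
    Q[(*state, action)] += alpha * (r + gamma * best_next - Q[(*state, action)])
```

A Markov-decision-process strategy would instead solve for the optimal policy offline (e.g. by value iteration) given transition probabilities estimated from the load statistics; the reinforcement-learning variant learns the same trade-off online, without an explicit model.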
Wireless communications has seen a tremendous diversification in applications and growth in the number of users in the last decade. Two types of terminals have evolved: software-defined radios (SDR) or software-reconfigurable radios (SRR), capable of handling multi-standard, multi-mode, multi-service applications, and very dedicated ultra-low-power radios for, e.g., sensor networks or RFID tags. Focusing on the SDR/SRR path, flexibility requires these radios to adapt optimally to changing quality-of-service (QoS) demands of the user and to a dynamic environment, while respecting a limited amount of available energy resources. The traditional early divide-and-conquer design approach, which led to independent design of the RF front-end, the digital baseband, and the protocol layers, has proven to result in rather high design margins and hence low average energy efficiency and little adaptivity to service or channel dynamics. In recent years, cross-layer and mixed-signal design concepts have been flourishing. We illustrate recent design concepts and their successful application in the context of wireless LAN (for both single and multiple antennas, single and multiple users), with MPEG-4 video streaming as the current driver application.