830 results for Ubiquitous and pervasive computing
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm, which is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, engineers today often work with technologies that do not support the abstractions used in the design of such systems; for this reason, research on methodologies becomes a central point of scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage, still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented, and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models thus becomes fundamental for comparing and evaluating methodologies: a meta-model specifies the concepts, rules and relationships used to define a methodology. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e.
the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects of multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; it is clear, at least, that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some function) and topology abstractions (entities of the environment that represent its spatial structure, either logical or physical). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be the aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure; nevertheless, optical networks today are still far from directly configurable and offered as network services, and need to be enriched with more user-oriented functionalities. Moreover, current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning, and cannot meet future network service requirements such as the coordinated control of resources. The overall objective of this work is to improve the usability and accessibility of the services provided by the optical network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. A service-oriented networking architecture based on advanced optical network technologies gives users and applications access to abstracted levels of information regarding the offered advanced network services. This thesis faces the problem of defining such a Service Oriented Architecture and its relevant building blocks, protocols and languages.
In particular, this work focuses on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to drive the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. Both technologies promise to provide all-optical burst or packet switching instead of the current circuit switching. However, the electronic domain is still present in the scheduler's forwarding and routing decisions. Because of the high transmission frequency of optics, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance and time-focused design of both memory and forwarding logic is needed. This open issue is faced in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
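The idea of reducing burst scheduling to a min/max computation can be illustrated with a horizon-based (LAUC-style) channel scheduler, a well-known formulation in optical burst switching; this is only an illustrative sketch with assumed names, not the implementation proposed in the thesis:

```python
# Minimal sketch of horizon-based (LAUC-style) burst scheduling.
# Each outgoing channel keeps a "horizon": the time it becomes free.
# Channel selection then reduces to a max over the horizons that do
# not exceed the burst arrival time, i.e. a simple min/max function.
# All names are illustrative, not taken from the thesis.

def schedule_burst(horizons, arrival, duration):
    """Pick the channel whose horizon is latest but still <= arrival
    (smallest created void); return its index, or None if all busy."""
    best = None
    for ch, h in enumerate(horizons):
        if h <= arrival and (best is None or h > horizons[best]):
            best = ch
    if best is not None:
        horizons[best] = arrival + duration  # channel busy until then
    return best

horizons = [0.0, 5.0, 8.0]                 # per-wavelength free times
print(schedule_burst(horizons, 6.0, 2.0))  # -> 1 (horizon 5.0 wins)
print(horizons)                            # -> [0.0, 8.0, 8.0]
```

Note that the per-burst work is a single pass over the channels, independent of past traffic, which is the property the abstract highlights.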
Abstract:
Many research fields are pushing the engineering of large-scale, mobile, and open systems towards the adoption of techniques inspired by self-organisation: pervasive computing, but also distributed artificial intelligence, multi-agent systems, social networks, peer-to-peer and grid architectures exploit adaptive techniques to make global system properties emerge in spite of the unpredictability of interactions and behaviour. Such a trend is also visible in coordination models and languages, whenever a coordination infrastructure needs to cope with managing interactions in highly dynamic and unpredictable environments. As a consequence, self-organisation can be regarded as a feasible metaphor for defining a radically new conceptual coordination framework. The resulting framework defines a novel coordination paradigm, called self-organising coordination, based on the idea of spreading coordination media over the network and charging them with services that manage interactions based on local criteria, resulting in the emergence of desired and fruitful global coordination properties of the system. Features like topology, locality, time-reactiveness, and stochastic behaviour play a key role both in the definition of such a conceptual framework and in the consequent development of self-organising coordination services. According to this framework, the thesis presents several self-organising coordination techniques developed during the PhD course, mainly concerning data distribution in tuple-space-based coordination systems. Some of these techniques have also been implemented in ReSpecT, a coordination language for tuple spaces based on logic tuples and reactions to events occurring in a tuple space.
In addition, the key role played by simulation and formal verification has been investigated, leading to analysing how automatic verification techniques like probabilistic model checking can be exploited in order to formally prove the emergence of desired behaviours when dealing with coordination approaches based on self-organisation. To this end, a concrete case study is presented and discussed.
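The locality and stochasticity that the abstract attributes to self-organising coordination can be illustrated with a toy model (not ReSpecT code; all names are assumed) in which tuples diffuse probabilistically between neighbouring tuple spaces arranged in a ring, each hop decided on purely local information:

```python
# Illustrative sketch of one self-organising coordination primitive:
# probabilistic diffusion of tuples between neighbouring tuple spaces.
# Each space acts only on local state; spreading emerges globally.
import random

def diffuse(spaces, p=0.5, rng=random):
    """One step: each tuple hops to a random ring neighbour with prob p."""
    nxt = [[] for _ in spaces]
    n = len(spaces)
    for i, space in enumerate(spaces):
        for t in space:
            if rng.random() < p:
                nxt[rng.choice([(i - 1) % n, (i + 1) % n])].append(t)
            else:
                nxt[i].append(t)
    return nxt

spaces = [["task"] * 8, [], [], []]   # all tuples start in one node
for _ in range(20):
    spaces = diffuse(spaces)
print([len(s) for s in spaces])       # tuples spread over the ring
```

No node ever inspects the global distribution, yet after a few steps the tuples tend towards an even spread: a (very small-scale) instance of a global property emerging from local, stochastic rules.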
Abstract:
Two of the main features of today's complex software systems, like pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since these models intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely, scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated.
Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in the tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because they adopt a different syntax). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments, e.g. experiments with tuple-based coordination within a Semantic Web middleware, along with analogous approaches surveyed in the literature. However, such approaches appear to be designed to tackle coordination for specific application contexts like the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space whose behaviour can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. Then, the tuple centre model was semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components.
The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model based on an existent coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
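The semantic-mismatch problem that motivates the thesis can be made concrete with a sketch of plain syntactic Linda-style matching (illustrative names, not the semantic tuple centre implementation): a tuple carrying the same information under a different vocabulary is simply not retrieved.

```python
# Sketch of purely syntactic Linda-style template matching, to show
# the mismatch problem the abstract describes. Names are illustrative.

WILD = object()  # wildcard field, like an unbound variable in a template

def matches(template, tup):
    """Syntactic match: same arity, each field equal or a wildcard."""
    return len(template) == len(tup) and all(
        f is WILD or f == v for f, v in zip(template, tup))

def rd(space, template):
    """Non-destructive read: first matching tuple, or None."""
    return next((t for t in space if matches(template, t)), None)

space = [("volume", "writer", "Eco")]         # same info, other vocabulary
print(rd(space, ("volume", "writer", WILD)))  # ('volume', 'writer', 'Eco')
print(rd(space, ("book", "author", WILD)))    # None: matching is syntactic
```

A semantic extension, as the thesis proposes, would let the second query succeed by reasoning over the domain vocabulary ("book" subsuming "volume", "author" equivalent to "writer") instead of comparing symbols literally.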
Abstract:
Starting from the pervasive computing paradigm, we want to face new system requirements, mainly concerning self-organisation, situatedness and adaptivity, through the definition and execution of nature-inspired patterns. These patterns are extracted from the study of the dynamics of biological systems, and we consider the biochemical tuple spaces model for their implementation. In particular, the aim of the thesis is to design and realise a first biochemical extension of TuCSoN (a technology based on the tuple space model) and then to verify its capabilities by means of a proper case study, dealing with local self-organisation and competition of services in an open and highly dynamic environment.
Abstract:
Multifunctional Structures (MFS) represent one of the most promising disruptive technologies in the space industry. The possibility of merging spacecraft primary and secondary structures, as well as attitude control, power management and onboard computing functions, is expected to allow savings in mass, volume and integration effort. Additionally, this will bring the modular construction of spacecraft to a whole new level by making the development and integration of spacecraft modules, or building blocks, leaner, reducing lead times from commissioning to launch from the current 3-6 years down to the order of 10 months, as foreseen by the latest Operationally Responsive Space (ORS) initiatives. Several basic functionalities have been integrated and tested in specimens of various natures over the last two decades. However, a more integrated, system-level approach was yet to be developed. The activity reported in this thesis focused on the system-level approach to multifunctional structures for spacecraft, namely in the context of nano- and micro-satellites. This thesis documents the work undertaken in the context of the MFS program promoted by the European Space Agency under the Technology Readiness Program (TRP): a feasibility study, including specimen manufacturing and testing. The work sequence covered a state-of-the-art review, with particular attention to the traditional modular architectures implemented in the ALMASat-1 and ALMASat-EO satellites, and requirements definition, followed by the development of a modular multi-purpose nano-spacecraft concept, and finally by the design, integration and testing of integrated MFS specimens. The approach for the integration of several critical functionalities into nano-spacecraft modules was validated, and the overall performance of the system was verified through relevant functional and environmental testing at the University of Bologna and University of Southampton laboratories.
Abstract:
Nowadays, data handling and data analysis in High Energy Physics require a vast amount of computational power and storage. In particular, the world-wide LHC Computing Grid (LCG), an infrastructure and pool of services developed and deployed by a large community of physicists and computer scientists, has proven to be a game changer in the efficiency of data analyses during Run-I at the LHC, playing a crucial role in the Higgs boson discovery. Recently, the Cloud computing paradigm has been emerging and reaching a considerable level of adoption by many different organisations, scientific and otherwise. The Cloud allows access to large computing resources that users do not own, shared among many scientific communities. Considering the challenging requirements of LHC physics in Run-II and beyond, the LHC computing community is interested in exploring Clouds and seeing whether they can provide a complementary approach, or even a valid alternative, to the existing Grid-based technological solutions. In the LHC community, several experiments have been adopting Cloud approaches; in particular, the experience of the CMS experiment is of relevance to this thesis. The LHC Run-II has just started, and Cloud-based solutions are already in production for CMS. However, other approaches to Cloud usage are being considered and are at the prototype level, such as the work done in this thesis. This effort is of paramount importance for equipping CMS with the capability to elastically and flexibly access and utilise the computing resources needed to face the challenges of Run-III and Run-IV. The main purpose of this thesis is to present forefront Cloud approaches that allow the CMS experiment to extend to on-demand resources dynamically allocated as needed. Moreover, direct access to Cloud resources is presented as a use case suited to the CMS experiment's needs.
Chapter 1 presents an overview of High Energy Physics at the LHC and of the CMS experience in Run-I, as well as the preparation for Run-II. Chapter 2 describes the current CMS Computing Model, and Chapter 3 presents the Cloud approaches pursued and used within the CMS Collaboration. Chapters 4 and 5 discuss the original and forefront work done in this thesis to develop and test working prototypes of elastic extensions of CMS computing resources on Clouds, and of HEP Computing "as a Service". The impact of this work on benchmark CMS physics use cases is also demonstrated.
Abstract:
After house-dust mite allergy, allergies to animals are the most frequent cause of indoor allergic respiratory symptoms. Under persistent allergen exposure, symptoms like rhinitis, itching of the skin or asthma are usually not perceived intensely and thus cannot be assigned to an animal or an animal source. In many cases animal allergies are based on perennial allergen exposure. Although most likely any animal may cause a respiratory allergy, cats, dogs, and horses are the most frequent elicitors. The diagnosis of an allergy to an animal needs to be made with due care, since it often causes emotional reactions, diverse conflicts, but also a lack of understanding. Rarer are allergies to fungi, even though fungi as allergen sources have for decades belonged to the differential diagnosis of respiratory allergies, particularly in cases of late-summer asthma. Fungi are ubiquitous and present both indoors and outdoors. Unfortunately, the field of fungal allergy is not well explored and diagnostic possibilities are limited. The most promising therapy for allergies to both animals and fungi would be the complete avoidance of contact with the respective allergen source. Indeed, many preventive recommendations are given; however, they are often not successfully implemented. In selected cases, specific immunotherapy is a potential therapeutic option for both animal and fungal allergies.
Abstract:
BACKGROUND: Although yawning is a ubiquitous and phylogenetically old phenomenon, its origin and purpose remain unclear. The study aimed at testing the widely held hypothesis that yawning is triggered by drowsiness and brings about a reversal or suspension of the process of falling asleep. METHODS: Subjects complaining of excessive sleepiness yawned spontaneously while trying to stay awake in a quiet and darkened room. Changes in their electroencephalogram (EEG) and heart rate variability (HRV) associated with yawning were compared to changes associated with isolated voluntary body movements. Special care was taken to remove eye-blink and movement artefacts from the recorded signals. RESULTS: Yawns were preceded and followed by significantly greater delta activity in the EEG than movements (p ≤ 0.008). After yawning, alpha rhythms were attenuated, decelerated, and shifted towards central brain regions (p ≤ 0.01), whereas after movements they were attenuated and accelerated (p < 0.02). A significant transient increase of HRV occurred after the onset of yawning and movements, followed by a significant slow decrease peaking 17 s after onset (p < 0.0001). No difference in HRV changes was found between yawns and movements. CONCLUSIONS: Yawning occurred during periods of increased drowsiness and sleep pressure, but was not followed by a measurable increase of the arousal level of the brain. It was neither triggered nor followed by a specific autonomic activation. Our results therefore confirm that yawns occur due to sleepiness, but do not provide evidence for an arousing effect of yawning.
Abstract:
Portfolio use in writing studies contexts is becoming ubiquitous and, as such, portfolios are in danger of being rendered meaningless; we therefore need to more fully theorize and historicize them. To this end, I examine portfolios: both the standardized portfolio used for assessment purposes and the personalized portfolio used for entering the job market. I take a critical look at portfolios as a form of technology and acknowledge some of the dangers of blindly using portfolios for gaining employment in the current economic structure of fast capitalism. As educators in the writing studies fields, it is paramount that instructors have a critical awareness of the consequences of portfolio creation on students as designers, lifelong learners, and citizens of a larger society. I argue that a better understanding of the pedagogical implications of portfolio use is imperative before implementing them in the classroom, and that a social-epistemic approach provides a valuable rethinking of portfolio use for assessment purposes. Further, I argue for the notions of meditation and transformation to be added alongside collection, selection, and reflection, because they enable portfolio designers and evaluators alike to thoughtfully consider new ways of meaning-making and innovation. Also important, and included with meditation and transformation, is the understanding that students are ideologically positioned in the educational system. For them to begin recognizing their situatedness is a step toward becoming designers of change. The portfolio can be a site for that change, and a way for them to document their own learning and ways of making meaning over a lifetime.
Abstract:
There is ample evidence of a longstanding and pervasive discourse positioning students, and engineering students in particular, as “bad writers.” This is a discourse perpetuated within the academy, the workplace, and society at large. But what are the effects of this discourse? Are students aware faculty harbor the belief students can’t write? Is student writing or confidence in their writing influenced by the negative tone of the discourse? This dissertation attempts to demonstrate that a discourse disparaging student writing exists among faculty, across disciplines, but particularly within the engineering disciplines, as well as to identify the reach of that discourse through the deployment of two attitudinal surveys—one for students, across disciplines, at Michigan Technological University and one for faculty, across disciplines at universities and colleges both within the United States and internationally. This project seeks to contribute to a more accurate and productive discourse about engineering students, and more broadly, all students, as writers—one that focuses on competencies rather than incompetence, one that encourages faculty to find new ways to characterize students as writers, and encourages faculty to recognize the limits of the utility of practitioner lore.
Abstract:
BACKGROUND: Engineered nanoparticles are becoming increasingly ubiquitous, and their toxicological effects on human health, as well as on the ecosystem, have become a concern. Since initial contact with nanoparticles occurs at the epithelium in the lungs (or skin, or eyes), in vitro cell studies with nanoparticles require dose-controlled systems for delivery of nanoparticles to epithelial cells cultured at the air-liquid interface. RESULTS: A novel air-liquid interface cell exposure system (ALICE) for nanoparticles in liquids is presented and validated. The ALICE generates a dense cloud of droplets with a vibrating-membrane nebulizer and utilizes combined cloud settling and single-particle sedimentation for fast (~10 min; entire exposure), repeatable (<12%), low-stress and efficient delivery of nanoparticles, or dissolved substances, to cells cultured at the air-liquid interface. Validation with various types of nanoparticles (Au, ZnO and carbon black nanoparticles) and solutes (such as NaCl) showed that the ALICE provided spatially uniform deposition (<1.6% variability) and had no adverse effect on the viability of a widely used alveolar human epithelial-like cell line (A549). The cell-deposited dose can be controlled with a quartz crystal microbalance (QCM) over a dynamic range of at least 0.02-200 µg/cm². The cell-specific deposition efficiency is currently limited to 0.072 (7.2% for two commercially available 6-well transwell plates), but a deposition efficiency of up to 0.57 (57%) is possible for better cell coverage of the exposure chamber. Dose-response measurements with ZnO nanoparticles (0.3-8.5 µg/cm²) showed significant differences in mRNA expression of pro-inflammatory (IL-8) and oxidative stress (HO-1) markers when comparing submerged and air-liquid interface exposures. Both exposure methods showed no cellular response below 1 µg/cm² ZnO, which indicates that ZnO nanoparticles are not toxic at occupationally allowed exposure levels.
CONCLUSION: The ALICE is a useful tool for dose-controlled nanoparticle (or solute) exposure of cells at the air-liquid interface. Significant differences between cellular response after ZnO nanoparticle exposure under submerged and air-liquid interface conditions suggest that pharmaceutical and toxicological studies with inhaled (nano-)particles should be performed under the more realistic air-liquid interface, rather than submerged cell conditions.
Abstract:
Context-dependent behavior is becoming increasingly important for a wide range of application domains, from pervasive computing to common business applications. Unfortunately, mainstream programming languages do not provide mechanisms that enable software entities to adapt their behavior dynamically to the current execution context. This leads developers to adopt convoluted designs to achieve the necessary runtime flexibility. We propose a new programming technique called Context-oriented Programming (COP) which addresses this problem. COP treats context explicitly, and provides mechanisms to dynamically adapt behavior in reaction to changes in context, even after system deployment at runtime. In this paper we lay the foundations of COP, show how dynamic layer activation enables multi-dimensional dispatch, illustrate the application of COP by examples in several language extensions, and demonstrate that COP is largely independent of other commitments to programming style.
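The paper's core mechanism, dynamic layer activation, can be sketched in a few lines (the paper itself presents COP through several language extensions; this Python rendering, with assumed names such as `with_layer`, is only an illustration of the idea): behaviour variations are grouped in layers, and activating a layer at runtime changes which variation a method dispatches to.

```python
# Minimal illustrative sketch of the Context-oriented Programming idea:
# behaviour variations grouped in "layers" that are activated dynamically
# for the extent of a block. Names here are assumed, not from the paper.
from contextlib import contextmanager

active_layers = []  # dynamically scoped set of active layers

@contextmanager
def with_layer(name):
    """Activate a layer for the duration of a with-block."""
    active_layers.append(name)
    try:
        yield
    finally:
        active_layers.pop()

class Phone:
    def ring(self):
        if "quiet" in active_layers:   # layer-dependent variation
            return "vibrate"
        return "loud ringtone"

phone = Phone()
print(phone.ring())           # -> 'loud ringtone'
with with_layer("quiet"):     # context changes, e.g. a meeting starts
    print(phone.ring())       # -> 'vibrate'
print(phone.ring())           # -> 'loud ringtone' again
```

The caller never passes a context flag: the same `ring()` call adapts because layer activation is scoped to the dynamic extent of the block, which is the multi-dimensional dispatch the abstract refers to.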
Abstract:
In this paper we provide a framework that enables the rapid development of applications using non-standard input devices. Flash is chosen as the programming language since it can be used to assemble applications quickly. We overcome Flash's difficulties in accessing external devices by introducing a very generic concept: the state information generated by input devices is transferred to a PC, where a program collects it, interprets it and makes it available on a web server. Application developers can then integrate a Flash component that accesses the data, stored in XML format, and use it directly in their application.
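The state-publishing concept described above can be sketched as follows (an illustrative reconstruction, not the paper's code; device and field names are assumed): the collector program serialises the current device state into XML, which a web server would then expose for the Flash component to poll and parse.

```python
# Illustrative sketch of the described concept: snapshot the state of
# non-standard input devices as XML for a polling client to consume.
# Device and field names are assumed for illustration.
import xml.etree.ElementTree as ET

def device_state_xml(devices):
    """Serialise {device: {field: value}} into an XML snapshot."""
    root = ET.Element("devices")
    for name, fields in devices.items():
        dev = ET.SubElement(root, "device", name=name)
        for field, value in fields.items():
            ET.SubElement(dev, field).text = str(value)
    return ET.tostring(root, encoding="unicode")

state = {"dial": {"angle": 42}, "pedal": {"pressed": True}}
print(device_state_xml(state))  # one <device> element per input device
```

In the full architecture this snapshot would sit behind a small HTTP server; the Flash application only needs to fetch and parse the XML, keeping it fully decoupled from the device drivers.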