932 results for Dynamic environments


Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

The change detection technique was used in this study to provide preliminary information on land-cover dynamics in the western basin of the Tiete River. This area is characterized by a sequence of reservoirs and intense agricultural activity, which triggers a series of negative effects. One of these impacts is the contamination of these aquatic environments and the proliferation of aquatic organisms in them, intensified by the release of nutrients from human activities. In this work it was possible to observe extensive switching between the secondary vegetation and bare land classes, probably related to agricultural activity.
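
As an illustrative aside (not the study's own workflow), post-classification change detection of the kind described can be summarized by cross-tabulating two classified land-cover maps into a transition matrix; the class codes and raster contents below are hypothetical placeholders.

    import numpy as np

    # Hypothetical classified rasters for two dates (same extent); codes are illustrative:
    # 0 = water, 1 = secondary vegetation, 2 = bare land, 3 = agriculture
    classes = ["water", "secondary vegetation", "bare land", "agriculture"]
    date1 = np.random.randint(0, 4, size=(100, 100))
    date2 = np.random.randint(0, 4, size=(100, 100))

    # Transition (change) matrix: rows = class at date 1, columns = class at date 2
    n = len(classes)
    transition = np.zeros((n, n), dtype=int)
    for c1 in range(n):
        for c2 in range(n):
            transition[c1, c2] = np.sum((date1 == c1) & (date2 == c2))

    # Pixels that switched between secondary vegetation and bare land, in either direction
    switched = transition[1, 2] + transition[2, 1]
    print(transition)
    print("vegetation <-> bare land switches:", switched)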

Relevance: 30.00%

Abstract:

The increasing diffusion of wireless-enabled portable devices is pushing toward the design of novel service scenarios, promoting temporary and opportunistic interactions in infrastructure-less environments. Mobile Ad Hoc Networks (MANETs) are the general model of these highly dynamic networks, which can be specialized, depending on the application case, into more specific and refined models such as Vehicular Ad Hoc Networks and Wireless Sensor Networks. Two deployment cases are of increasing relevance: resource diffusion among users equipped with portable devices, such as laptops, smartphones, or PDAs, in crowded areas (termed dense MANETs), and dissemination/indexing of monitoring information collected in Vehicular Sensor Networks. The extreme dynamicity of these scenarios calls for novel distributed protocols and services that facilitate application development. To this aim, we have designed middleware solutions supporting these challenging tasks. REDMAN manages, retrieves, and disseminates replicas of software resources in dense MANETs; it implements novel lightweight protocols to maintain a desired replication degree despite participant mobility, and to perform resource retrieval efficiently. REDMAN exploits the high-density assumption to achieve scalability and limited network overhead. Sensed-data gathering and distributed indexing in Vehicular Networks raise similar issues: we propose a specific middleware support, called MobEyes, which exploits node mobility to opportunistically diffuse data summaries among neighbor vehicles. MobEyes creates a low-cost opportunistic distributed index to query the distributed storage and to determine the location of needed information. Extensive validation and testing of REDMAN and MobEyes prove the effectiveness of our original solutions in limiting communication overhead while maintaining the required accuracy of replication degree and indexing completeness, and demonstrate the feasibility of the middleware approach.
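
The abstract does not give REDMAN's actual protocol; as a minimal sketch of the underlying idea of keeping a target replication degree despite node mobility, one could imagine something like the following. The node model, the target degree, and the use of global knowledge are assumptions for illustration only (a real dense-MANET protocol would rely on lightweight local estimation).

    import random

    DESIRED_DEGREE = 5  # assumed target number of replicas per resource

    class Node:
        def __init__(self, node_id):
            self.node_id = node_id
            self.replicas = set()

    def maintain_replication(resource, nodes, current_holders):
        """Toy repair step: count the holders still present and restore the degree."""
        alive = [n for n in current_holders if n in nodes]  # holders still in the network
        deficit = DESIRED_DEGREE - len(alive)
        if deficit > 0:
            candidates = [n for n in nodes if n not in alive]
            for target in random.sample(candidates, min(deficit, len(candidates))):
                target.replicas.add(resource)
                alive.append(target)
        return alive

    nodes = [Node(i) for i in range(8)]
    holders = maintain_replication("map_tile_17", nodes, current_holders=nodes[:2])
    print(len(holders))  # -> 5 replicas after repair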

Relevance: 30.00%

Abstract:

The dynamicity and heterogeneity that characterize pervasive environments raise new challenges in the design of mobile middleware: such environments exhibit a degree of heterogeneity, variability, and dynamicity that conventional middleware solutions are not able to manage adequately. Originally designed for use in a relatively static context, such middleware systems tend to hide low-level details to provide applications with a transparent view of the underlying execution platform. In mobile environments, however, the context is extremely dynamic and cannot be managed by a priori assumptions. Novel middleware should therefore support mobile computing applications in the task of adapting their behavior to frequent changes in the execution context, that is, it should become context-aware. In particular, this thesis has identified the following key requirements for novel context-aware middleware that existing solutions do not yet fulfil. (i) Middleware solutions should support interoperability between possibly unknown entities by providing expressive representation models that make it possible to describe interacting entities, their operating conditions, and the surrounding world, i.e., their context, according to an unambiguous semantics. (ii) Middleware solutions should support distributed applications in the task of reconfiguring and adapting their behavior/results to ongoing context changes. (iii) Context-aware middleware support should be deployed on heterogeneous devices under variable operating conditions, such as different user needs, application requirements, available connectivity, and device computational capabilities, as well as changing environmental conditions. Our main claim is that the adoption of semantic metadata to represent context information and context-dependent adaptation strategies makes it possible to build context-aware middleware suitable for all dynamically available portable devices. Semantic metadata provide powerful knowledge representation means to model even complex context information, and support automated reasoning to infer additional and/or more complex knowledge from available context data. In addition, we suggest that, by adopting proper configuration and deployment strategies, semantic support features can be provided to differentiated users and devices according to their specific needs and current context. This thesis has investigated novel design guidelines and implementation options for semantic-based context-aware middleware solutions targeted to pervasive environments. These guidelines have been applied to different application areas within pervasive computing that would particularly benefit from the exploitation of context. Common to all applications is the key role of context in enabling mobile users to personalize applications based on their needs and current situation. The main contributions of this thesis are (i) the definition of a metadata model to represent and reason about context, (ii) the definition of a model for the design and development of context-aware middleware based on semantic metadata, (iii) the design of three novel middleware architectures and the development of a prototype implementation for each of these architectures, and (iv) the proposal of a viable approach to portability issues raised by the adoption of semantic support services in pervasive applications.
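
As a minimal sketch (not the thesis' actual metadata model), context can be represented as semantic triples with simple rules that infer additional knowledge from available context data; the predicates and the rule below are hypothetical.

    # Hypothetical context facts as (subject, predicate, object) triples
    facts = {
        ("user42", "locatedIn", "meetingRoom"),
        ("meetingRoom", "hasActivity", "presentation"),
        ("user42", "carries", "smartphone"),
    }

    def infer(facts):
        """Toy forward-chaining rule: if a user is in a room where a presentation
        is taking place, infer that the user's devices should be silenced."""
        inferred = set(facts)
        for (s, p, o) in facts:
            if p == "locatedIn" and (o, "hasActivity", "presentation") in facts:
                inferred.add((s, "shouldApplyProfile", "silent"))
        return inferred

    print(infer(facts) - facts)  # -> {("user42", "shouldApplyProfile", "silent")}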

Relevance: 30.00%

Abstract:

The wide diffusion of cheap, small, and portable sensors, integrated in an unprecedentedly large variety of devices, together with the availability of almost ubiquitous Internet connectivity, makes it possible to collect an unprecedented amount of real-time information about the environment we live in. These data streams, if properly analyzed in a timely manner, can be exploited to build new intelligent and pervasive services that have the potential of improving people's quality of life in a variety of cross-cutting domains such as entertainment, health-care, or energy management. The large heterogeneity of application domains, however, calls for a middleware-level infrastructure that can effectively support their different quality requirements. In this thesis we study the challenges related to the provisioning of differentiated quality-of-service (QoS) during the processing of data streams produced in pervasive environments. We analyze the trade-offs between guaranteed quality, cost, and scalability in stream distribution and processing by surveying existing state-of-the-art solutions and identifying and exploring their weaknesses. We propose an original model for QoS-centric distributed stream processing in data centers and we present Quasit, its prototype implementation, offering a scalable and extensible platform that can be used by researchers to implement and validate novel QoS-enforcement mechanisms. To support our study, we also explore an original class of weaker quality guarantees that can reduce costs when application semantics do not require strict quality enforcement. We validate the effectiveness of this idea in a practical use-case scenario that investigates partial fault-tolerance policies in stream processing by performing a large experimental study on the prototype of our novel LAAR dynamic replication technique. Our modeling, prototyping, and experimental work demonstrates that, by providing data distribution and processing middleware with application-level knowledge of the different quality requirements associated with different pervasive data flows, it is possible to improve system scalability while reducing costs.
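
The abstract does not detail LAAR itself; as a loose illustration of the general idea of trading replication cost against quality in stream processing, a partial fault-tolerance policy might look like the sketch below. The operator names, quality labels, and budget rule are assumptions, not the thesis' mechanism.

    def choose_replicas(operators, budget):
        """Toy partial fault-tolerance policy: replicate only operators whose
        streams declare strict quality requirements, until a cost budget is spent.

        operators: list of (name, qos_class, cost) tuples; qos_class in {"strict", "relaxed"}
        """
        replicated, spent = [], 0
        # Strict-QoS operators first, cheapest first, so the budget goes further
        for name, qos, cost in sorted(operators, key=lambda o: (o[1] != "strict", o[2])):
            if qos == "strict" and spent + cost <= budget:
                replicated.append(name)
                spent += cost
        return replicated

    ops = [("parser", "strict", 2), ("aggregator", "relaxed", 1), ("alerting", "strict", 3)]
    print(choose_replicas(ops, budget=4))  # -> ["parser"] (alerting does not fit the budget)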

Relevance: 30.00%

Abstract:

Transport processes of anisotropic metallic nanoparticles, such as gold nanorods, in complex fluids and/or confined geometries play an important role in a wide range of biomedical and industrial applications. One route to a deep, fundamental understanding of transport mechanisms is the use of two powerful methods: dynamic light scattering (DLS) and resonance-enhanced dynamic light scattering (REDLS) near an interface. In this work, nanomolar suspensions of gold nanorods stabilized with cetyltrimethylammonium bromide (CTAB) were investigated with DLS and, near an interface, with REDLS. With DLS, a wavelength-dependent enhancement of the anisotropic scattering was observed, which arises from the excitation of the longitudinal surface plasmon resonance. The high scattering intensity near the longitudinal surface plasmon resonance frequency for rods oriented parallel to the exciting optical field allowed the translational anisotropy to be resolved in an isotropic medium. This wavelength-dependent anisotropic light scattering enables new applications, such as the study of single-particle dynamics in complex environments by means of depolarized dynamic light scattering. Near an interface, a strong slowdown of the translational diffusion was observed, whereas the rotation showed a pronounced but less severe slowdown. To investigate the possible influence of charge at the solid interface, the metal was coated with electrically neutral poly(methyl methacrylate) (PMMA). In a further approach, the CTAB in the gold nanorod solution was replaced by the covalently bound 16-mercaptohexadecyltrimethylammonium bromide (MTAB), which resulted in a significantly smaller slowdown.
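
As a brief worked example of the standard DLS relations that underlie such measurements (not values from this thesis): in polarized DLS the field autocorrelation decays with rate Gamma = D_T * q^2, where q = (4*pi*n/lambda) * sin(theta/2), and in depolarized DLS a rotational term 6*D_R adds to the decay rate. The numerical inputs below are illustrative assumptions.

    import math

    # Illustrative parameters (not measured values from the thesis)
    n_medium = 1.33           # refractive index of water
    wavelength = 633e-9       # laser wavelength in m
    theta = math.radians(90)  # scattering angle

    # Scattering vector magnitude q = (4*pi*n/lambda) * sin(theta/2)
    q = 4 * math.pi * n_medium / wavelength * math.sin(theta / 2)

    gamma_vv = 1.2e4   # assumed polarized (VV) decay rate in 1/s
    gamma_vh = 2.4e4   # assumed depolarized (VH) decay rate in 1/s

    D_T = gamma_vv / q**2                 # translational diffusion coefficient
    D_R = (gamma_vh - D_T * q**2) / 6.0   # rotational diffusion coefficient (VH: D_T*q^2 + 6*D_R)

    print(f"q = {q:.3e} 1/m, D_T = {D_T:.3e} m^2/s, D_R = {D_R:.3e} 1/s")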

Relevance: 30.00%

Abstract:

This doctoral thesis presents computational work, and its synthesis with experiments, for internal (tube and channel geometries) as well as external (flow of a pure vapor over a horizontal plate) condensing flows. The computational work obtains accurate numerical simulations of the full two-dimensional governing equations for steady and unsteady condensing flows in gravity/0g environments. This doctoral work investigates flow features, flow regimes, attainability issues, stability issues, and responses to boundary fluctuations for condensing flows in different flow situations. This research finds new features of unsteady solutions of condensing flows, reveals interesting differences between gravity-driven and shear-driven situations, and discovers novel boundary-condition sensitivities of shear-driven internal condensing flows. The synthesis of computational and experimental results presented here for gravity-driven in-tube flows lays the framework for future two-phase component analysis in any thermal system. It is shown for both gravity-driven and shear-driven internal condensing flows that the steady governing equations have unique solutions for a given inlet pressure, a given inlet vapor mass flow rate, and a fixed cooling method for the condensing surface. However, the unsteady equations of shear-driven internal condensing flows can yield different “quasi-steady” solutions based on different specifications of exit pressure (equivalently, exit mass flow rate) concurrent with the inlet pressure specification. This thesis presents a novel categorization of internal condensing flows based on their sensitivity to concurrently applied boundary (inlet and exit) conditions. The computational investigations of an external shear-driven flow of vapor condensing over a horizontal plate show the limits of applicability of the analytical solution. Simulations for this external condensing flow address its stability issues and throw light on flow-regime transitions caused by ever-present bottom-wall vibrations; it is identified that the laminar-to-turbulent transition for these flows can be affected by such vibrations. Detailed investigations of the dynamic stability of this shear-driven external condensing flow result in the introduction of a new variable, which characterizes the ratio of the strength of the underlying stabilizing attractor to that of the destabilizing vibrations. Besides the development of CFD tools and computational algorithms, a direct application of the research done for this thesis is in the effective prediction and design of two-phase components in thermal systems used in different applications. Some of the important internal condensing flow results about sensitivities to boundary fluctuations are also expected to be applicable to the flow boiling phenomenon. The novel flow sensitivities discovered through this research, if employed effectively after system-level analysis, will result in the development of better control strategies in ground- and space-based two-phase thermal systems.

Relevance: 30.00%

Abstract:

We present redirection techniques that support exploration of large-scale virtual environments (VEs) by means of real walking. We quantify to what degree users can unknowingly be redirected in order to guide them through VEs in which the virtual paths differ from the physical paths. We further introduce the concept of dynamic passive haptics, by which any number of virtual objects can be mapped to real physical proxy props having similar haptic properties (i.e., size, shape, and surface structure), such that the user can sense these virtual objects by touching their real-world counterparts. Dynamic passive haptics provides the user with the illusion of interacting with a desired virtual object by redirecting her to the corresponding proxy prop. We describe the concepts of generic redirected walking and dynamic passive haptics and present experiments in which we have evaluated these concepts. Furthermore, we discuss implications derived from a user study, and we present approaches that derive physical paths which may vary from their virtual counterparts.
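
As an illustrative sketch of the kind of mapping redirection techniques apply (the specific gain values below are assumptions, not the paper's results): rotation and translation gains scale the user's real motion before it is applied to the virtual camera, so the virtual path can diverge from the physical one while staying below detection thresholds.

    # Hypothetical redirection gains; real systems keep them within perceptual thresholds
    ROTATION_GAIN = 1.2      # virtual rotation per unit of real rotation
    TRANSLATION_GAIN = 1.05  # virtual distance per unit of real distance
    CURVATURE = 0.05         # extra virtual yaw (rad) injected per meter walked

    def redirect(real_delta_yaw, real_delta_distance):
        """Map one frame's real head rotation (rad) and walked distance (m)
        to the virtual camera update."""
        virtual_yaw = real_delta_yaw * ROTATION_GAIN + real_delta_distance * CURVATURE
        virtual_distance = real_delta_distance * TRANSLATION_GAIN
        return virtual_yaw, virtual_distance

    print(redirect(0.1, 0.5))  # e.g. a small head turn while taking a step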

Relevance: 30.00%

Abstract:

The volume consists of twenty-five chapters selected from among peer-reviewed papers presented at the CELDA (Cognition and Exploratory Learning in the Digital Age) 2013 Conference, held in Fort Worth, Texas, USA, in October 2013, and also from world-class scholars in e-learning systems, environments, and approaches. The following sub-topics are included: Exploratory Learning Technologies (Part I), e-Learning social web design (Part II), Learner communities through e-Learning implementations (Part III), and Collaborative and student-centered e-Learning design (Part IV). E-Learning has been, since its initial stages, a synonym for flexibility. While this dynamic nature has mainly been associated with time and space, it is safe to argue that it currently embraces other aspects such as the learners' profile, the scope of subjects that can be taught electronically, and the technology it employs. New technologies also widen the range of activities and skills developed in e-Learning. Electronic learning environments have evolved past the exclusive delivery of knowledge. Technology has endowed e-Learning with the possibility of remotely fostering problem-solving skills, critical thinking, and teamwork, by investing in information exchange, collaboration, personalisation, and community building.

Relevance: 30.00%

Abstract:

In this paper, we present the Cellular Dynamic Simulator (CDS) for simulating diffusion and chemical reactions within crowded molecular environments. CDS is based on a novel event-driven algorithm specifically designed for precise calculation of the timing of collisions, reactions, and other events for each individual molecule in the environment. Generic mesh-based compartments allow the creation/importation of very simple or detailed cellular structures that exist in a 3D environment. Multiple levels of compartments and static obstacles can be used to create a dense environment that mimics cellular boundaries and the intracellular space. The CDS algorithm takes into account volume exclusion and molecular crowding, which may impact signaling cascades in small sub-cellular compartments such as dendritic spines. With the CDS, we can simulate simple enzyme reactions, aggregation, and channel transport, as well as highly complicated chemical reaction networks of both freely diffusing and membrane-bound multi-protein complexes. Components of the CDS are defined generically so that the simulator can be applied to a wide range of environments in terms of scale and level of detail. Through an initialization GUI, a simple simulation environment can be created and populated within minutes, yet the tool is powerful enough to design complex 3D cellular architectures. The initialization tool allows visual confirmation of the environment construction prior to execution by the simulator. This paper describes the CDS algorithm and design implementation, provides an overview of the types of features available, and highlights the utility of those features in demonstrations.
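
A minimal sketch of the event-driven pattern described, namely a priority queue ordered by event time; the event kinds and the trivial handling loop are assumptions for illustration and not the CDS implementation itself.

    import heapq

    # Each event is (time, kind, data); the queue always yields the earliest event next.
    events = []
    heapq.heappush(events, (0.8, "collision", ("molA", "molB")))
    heapq.heappush(events, (0.3, "reaction", ("enzyme", "substrate")))
    heapq.heappush(events, (1.5, "boundary", ("molA", "compartment1")))

    t = 0.0
    while events:
        t, kind, data = heapq.heappop(events)
        print(f"t={t:.2f}: handle {kind} for {data}")
        # A real simulator would update the molecules involved here and then
        # schedule (or invalidate) their future events.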

Relevance: 30.00%

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require the management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all her/his applications are met. She/he is also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants' applications. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on the dynamic sizing (scaling) of virtual infrastructures composed of application services bound to virtual machines (VMs). We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included into semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to satisfying this problem. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
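
As a hedged sketch of an SLA-driven scaling rule of the general kind described (the metric name, thresholds, and VM limits are assumptions, not the thesis' rules):

    from dataclasses import dataclass

    @dataclass
    class ScalingRule:
        metric: str            # e.g. "avg_response_time_ms" (hypothetical metric name)
        scale_up_above: float
        scale_down_below: float
        min_vms: int = 1
        max_vms: int = 10

    def decide(rule, current_vms, observed):
        """Return the new VM count for one service given an observed metric value."""
        if observed > rule.scale_up_above and current_vms < rule.max_vms:
            return current_vms + 1
        if observed < rule.scale_down_below and current_vms > rule.min_vms:
            return current_vms - 1
        return current_vms

    rule = ScalingRule("avg_response_time_ms", scale_up_above=250, scale_down_below=80)
    print(decide(rule, current_vms=3, observed=310))  # -> 4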

Relevance: 30.00%

Abstract:

Learning analytics is the analysis of static and dynamic data extracted from virtual learning environments, in order to understand and optimize the learning process. Generally, this dynamic data is generated by the interactions which take place in the virtual learning environment. At the present time, many implementations for grouping of data have been proposed, but there is no consensus yet on which interactions and groups must be measured and analyzed. There is also no agreement on what is the influence of these interactions, if any, on learning outcomes, academic performance or student success. This study presents three different extant interaction typologies in e-learning and analyzes the relation of their components with students' academic performance. The three different classifications are based on the agents involved in the learning process, the frequency of use and the participation mode, respectively. The main findings from the research are: a) that agent-based classifications offer a better explanation of student academic performance; b) that at least one component in each typology predicts academic performance; and c) that student-teacher and student-student, evaluating students, and active interactions, respectively, have a significant impact on academic performance, while the other interaction types are not significantly related to academic performance.
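
As an illustrative sketch only (the data and variables are invented, not the study's), the kind of analysis described amounts to regressing an academic-performance measure on counts of different interaction types:

    import numpy as np

    # Hypothetical per-student data: columns are counts of student-teacher,
    # student-student, and student-content interactions in the virtual learning environment
    X = np.array([[12,  5, 40],
                  [30, 22, 35],
                  [ 8,  2, 50],
                  [25, 18, 20],
                  [40, 30, 60]], dtype=float)
    y = np.array([62, 78, 55, 74, 88], dtype=float)  # final grades (invented)

    # Ordinary least squares with an intercept term
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("intercept and coefficients:", coef)
    # A positive coefficient would suggest that interaction type is associated
    # with higher performance in this toy data set.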

Relevance: 30.00%

Abstract:

Systems used for localizing targets such as goods, individuals, or animals commonly rely on operational means to meet the final application demands. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced upon deploying a localization network that integrates energy-harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references like those of the Global Positioning System (GPS). A number of energy-harvesting solutions, like thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts. Many works on localization problems assume that devices have certain capabilities to determine unknown locations based on range-based techniques or fingerprinting, which cannot be assumed in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are, therefore, not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in a few days at most. Such short-life solutions are not particularly desirable in the framework considered. In tracking, the challenge typically addressed is that of attaining high precision levels using complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with just part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The usage of harvesting modules in the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in the nodes with harvesters; this may also be an advantage in economic terms. There is a second kind of node: battery-powered (without kinetic energy harvesters) and therefore dependent on temperature and battery replacements.
In addition, their operation is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. They are also battery-powered and are used to retrieve information from the network so that it can be presented to users. The system's operational chain starts at the kinetic-powered nodes broadcasting their own identifier. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection registry comprises, at least, a node identifier and the position read from its GPS module by the battery-operated node prior to the detection. The characteristics of the system presented give the aforementioned operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements at the kinetic modules (reindeer movements in our application). Not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not poll their GPS module continuously, hence localization errors rise even more. Let us recall at this point that such behavior is tied to the aforementioned power-saving policies to extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user is aware of that detection: it takes some time to find a hotspot. Tracking is posed as a problem of a single kinetically powered target and a population of battery-operated nodes with higher densities than in the localization case. Since the latter provide their approximate positions as reference locations, the study is again focused on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter. The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically. Subsequently, such statistical characterization is used to forecast performance figures given specific operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and both the duty cycles and cluster sizes of the battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and become more environmentally friendly by diminishing the potential number of batteries that can be lost.
Whether it is applicable or not ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which is, as has been explained, one of the topics of this Thesis.
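
Of the three filters mentioned, the alpha-beta filter is the simplest; a minimal 1-D sketch is given below with assumed gains and invented measurements (noisy reference positions), purely to illustrate the idea rather than the thesis' implementation.

    def alpha_beta_track(measurements, dt=1.0, alpha=0.85, beta=0.005):
        """1-D alpha-beta filter: predict with constant velocity, then correct
        position (alpha) and velocity (beta) using the measurement residual."""
        x, v = measurements[0], 0.0  # initial position estimate and velocity
        estimates = []
        for z in measurements[1:]:
            # Predict
            x_pred = x + v * dt
            # Correct with the (possibly distorted) reference measurement
            r = z - x_pred
            x = x_pred + alpha * r
            v = v + (beta / dt) * r
            estimates.append(x)
        return estimates

    # Invented noisy position readings of a moving target
    print(alpha_beta_track([0.0, 1.2, 1.9, 3.4, 3.8, 5.1]))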

Relevance: 30.00%

Abstract:

Providing security to the emerging field of ambient intelligence will be difficult if we rely only on existing techniques, given the dynamic and heterogeneous nature of these systems. Moreover, the security demands of these systems are expected to grow, as many applications will require accurate context modeling. In this work we propose an enhancement to the reputation systems traditionally deployed for securing these systems. Different anomaly detectors are combined using the immunological paradigm to optimize reputation system performance in response to evolving security requirements. As an example, the experiments show how a combination of detectors based on unsupervised techniques (self-organizing maps and genetic algorithms) can help to significantly reduce the global response time of the reputation system. The proposed solution offers many benefits: scalability, fast response to adversarial activities, the ability to detect unknown attacks, high adaptability, and a high capability for detecting and confining attacks. For these reasons, we believe that our solution is capable of coping with the dynamism of ambient intelligence systems and their growing security demands.
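
A minimal sketch of the general idea of combining several anomaly detectors to update a reputation score; the detectors, weights, threshold, and update rule below are assumptions and not the paper's immunological scheme.

    def combined_anomaly_score(event, detectors, weights):
        """Weighted combination of normalized anomaly scores in [0, 1]."""
        return sum(w * d(event) for d, w in zip(detectors, weights)) / sum(weights)

    def update_reputation(reputation, anomaly, penalty=0.5, reward=0.05):
        """Lower reputation in proportion to the anomaly score; otherwise recover slowly."""
        if anomaly > 0.7:  # assumed alarm threshold
            return max(0.0, reputation - penalty * anomaly)
        return min(1.0, reputation + reward)

    # Two toy detectors standing in for, e.g., a SOM-based and a GA-tuned detector
    som_like = lambda e: min(1.0, abs(e["rate"] - 1.0))   # deviation from an expected rate
    rule_like = lambda e: 1.0 if e["malformed"] else 0.0

    event = {"rate": 2.4, "malformed": False}
    score = combined_anomaly_score(event, [som_like, rule_like], weights=[0.6, 0.4])
    print(score, update_reputation(0.9, score))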

Relevance: 30.00%

Abstract:

Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions for indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture of Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms is able to obtain a more accurate segmentation of the foreground/background than other state-of-the-art approaches.
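
A heavily simplified sketch of per-pixel background subtraction on depth frames, using a single running Gaussian per pixel rather than a full mixture and omitting the Bayesian network described in the paper; all parameters and the synthetic frames are assumptions.

    import numpy as np

    class DepthBackgroundModel:
        """One running Gaussian per pixel on depth values; pixels far from the
        model (in standard deviations) are labeled foreground."""

        def __init__(self, first_frame, learning_rate=0.02, threshold=2.5):
            self.mean = first_frame.astype(float)
            self.var = np.full(first_frame.shape, 100.0)  # initial depth variance (mm^2)
            self.lr = learning_rate
            self.k = threshold

        def apply(self, frame):
            frame = frame.astype(float)
            diff = frame - self.mean
            foreground = diff**2 > (self.k**2) * self.var
            # Update the model only where the pixel still looks like background
            bg = ~foreground
            self.mean[bg] += self.lr * diff[bg]
            self.var[bg] = (1 - self.lr) * self.var[bg] + self.lr * diff[bg]**2
            return foreground

    # Usage with synthetic depth frames (values in millimeters)
    model = DepthBackgroundModel(np.full((4, 4), 2000))
    print(model.apply(np.full((4, 4), 2000)))               # all background
    print(model.apply(np.array([[2000]*4]*3 + [[900]*4])))  # last row flagged as foreground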