949 results for vector addition systems
Abstract:
The purpose of this paper is to use the framework of Lie algebroids to study optimal control problems for affine connection control systems (ACCSs) on Lie groups. In this context, the critical trajectories of the problem are geometrically characterized as the integral curves of a Hamiltonian vector field.
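For orientation, a minimal sketch of what a Hamiltonian characterization looks like in the familiar canonical setting; the paper works on the dual bundle of a Lie algebroid, so the coordinates below are purely illustrative:

```latex
% The Hamiltonian vector field X_H of a Hamiltonian H is defined by
%   \iota_{X_H}\,\omega = \mathrm{d}H ,
% which in canonical coordinates (q^i, p_i) yields the equations for
% the critical trajectories:
\dot{q}^{\,i} = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q^{\,i}} .
```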
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request from them. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most of the existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most of the existing systems are reactive: users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request from them. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform to mine users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated a few approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform which has been evolving for the last 6 years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities that they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors in order to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to the users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure, within the Rover framework, and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
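For illustration, a minimal sketch of multi-dimensional collaborative recommendation via CP tensor factorization, assuming a dense numpy tensor and plain SGD; the dissertation's actual algorithm, features and data are not specified here:

```python
# A user x item x context tensor is factorized into rank-R latent
# factors; missing entries are then predicted from those factors.
import numpy as np

def cp_factorize(T, mask, rank=4, lr=0.01, reg=0.05, epochs=500, seed=0):
    """SGD on the squared error of the CP model
    T[u,i,c] ~ sum_r U[u,r] * V[i,r] * W[c,r], over observed entries."""
    rng = np.random.default_rng(seed)
    n_u, n_i, n_c = T.shape
    U = rng.normal(scale=0.1, size=(n_u, rank))
    V = rng.normal(scale=0.1, size=(n_i, rank))
    W = rng.normal(scale=0.1, size=(n_c, rank))
    obs = np.argwhere(mask)                      # indices of observed entries
    for _ in range(epochs):
        for u, i, c in obs:
            u_f, v_f, w_f = U[u].copy(), V[i].copy(), W[c].copy()
            err = T[u, i, c] - np.sum(u_f * v_f * w_f)
            # Gradient step on each factor, with L2 regularization.
            U[u] += lr * (err * v_f * w_f - reg * u_f)
            V[i] += lr * (err * u_f * w_f - reg * v_f)
            W[c] += lr * (err * u_f * v_f - reg * w_f)
    return U, V, W

# Toy example: 5 users, 6 locations, 3 time-of-day contexts.
rng = np.random.default_rng(1)
T = rng.integers(1, 6, size=(5, 6, 3)).astype(float)
mask = rng.random(T.shape) < 0.6                 # 60% of entries observed
U, V, W = cp_factorize(T, mask)
pred = np.einsum('ur,ir,cr->uic', U, V, W)       # reconstruct full tensor
print("predicted score for user 0, location 2, context 1:", pred[0, 2, 1])
```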
Abstract:
A vector field in n-space determines a competitive (or cooperative) system of differential equations provided all of the off-diagonal terms of its Jacobian matrix are nonpositive (or nonnegative). The main results in this article are the following. A cooperative system cannot have nonconstant attracting periodic solutions. In a cooperative system whose Jacobian matrices are irreducible the forward orbit converges for almost every point having compact forward orbit closure. In a cooperative system in 2 dimensions, every solution is eventually monotone. Applications are made to generalizations of positive feedback loops.
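In symbols, using the standard definitions:

```latex
% A C^1 vector field f on an open subset of R^n generates a cooperative
% system \dot{x} = f(x) when all off-diagonal Jacobian entries are
% nonnegative, and a competitive one when they are nonpositive:
\frac{\partial f_i}{\partial x_j}(x) \ge 0 \quad (\text{cooperative}),
\qquad
\frac{\partial f_i}{\partial x_j}(x) \le 0 \quad (\text{competitive}),
\qquad i \ne j .
```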
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have been accessible via the Internet and Intranet. Many employees are working from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the user's session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and temporal relationships that make them unique by providing a model of how each user typically behaves. Users are then continuously monitored during software operations. Large deviations from "normal behavior" may indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences of user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. This tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
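For illustration, a minimal sketch of the n-gram idea, assuming bigrams over web-log actions with add-alpha smoothing; Intruder Detector's actual features, thresholds and log format are not specified here:

```python
# Build a bigram model of a user's actions from training logs, then
# score new sessions; a low average log-likelihood suggests behavior
# that deviates from that user's "normal" profile.
from collections import defaultdict
import math

def train_bigrams(sessions):
    """Count action bigrams over a user's training sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for actions in sessions:
        for prev, cur in zip(actions, actions[1:]):
            counts[prev][cur] += 1
    return counts

def score_session(counts, actions, vocab_size, alpha=1.0):
    """Average per-transition log-likelihood with add-alpha smoothing."""
    total, n = 0.0, 0
    for prev, cur in zip(actions, actions[1:]):
        row = counts.get(prev, {})
        prob = (row.get(cur, 0) + alpha) / (sum(row.values()) + alpha * vocab_size)
        total += math.log(prob)
        n += 1
    return total / max(n, 1)

train = [["login", "search", "view", "edit", "save", "logout"],
         ["login", "search", "view", "view", "logout"]]
vocab = {a for s in train for a in s}
model = train_bigrams(train)
normal = score_session(model, ["login", "search", "view", "logout"], len(vocab))
odd = score_session(model, ["login", "save", "save", "save", "logout"], len(vocab))
print(f"normal session score: {normal:.2f}, anomalous session score: {odd:.2f}")
```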
Abstract:
We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first one relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to late-time response while at the same time providing a means to both interpolate and extrapolate the used frequency-domain data. The aforementioned hybrid methodology can be considered as a generalization of the frequency-domain rational function fitting utilizing frequency-domain response data only, and the time-domain rational function fitting utilizing transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide for a more stable rational function fitting process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such an element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements in the network matrix, such an approach provides for improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
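For illustration, a minimal sketch of one building block of such fitting: with a fixed set of stable poles (in practice the poles are relocated iteratively, e.g. by vector fitting), the residues and constant term of the rational model follow from a linear least-squares problem over frequency samples. The pole values and response below are synthetic assumptions, not data from the thesis:

```python
# Fit the residues r_k and constant d of
#   H(s) ~ d + sum_k r_k / (s - p_k)
# for a fixed set of stable poles p_k, by linear least squares.
import numpy as np

def fit_residues(s, H, poles):
    """Least-squares fit of residues and constant term for given poles."""
    A = np.column_stack([1.0 / (s[:, None] - poles[None, :]),
                         np.ones_like(s)])
    # Stack real and imaginary parts so the solve stays real-valued.
    A_ri = np.vstack([A.real, A.imag])
    H_ri = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A_ri, H_ri, rcond=None)
    return x[:-1], x[-1]                          # residues, constant term

# Synthetic test: a known 2-pole response, fitted back from samples.
freqs = np.linspace(1e6, 1e9, 200)                # Hz
s = 2j * np.pi * freqs
true_poles = np.array([-1e8, -5e8])
H = 0.5 + 2e8 / (s - true_poles[0]) + 1e8 / (s - true_poles[1])
residues, d = fit_residues(s, H, true_poles)
print("recovered residues:", residues, "constant:", d)
```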
Abstract:
The electrical conductivity of solid-state matter is a fundamental physical property and can be precisely derived from the resistance measured via the four-point probe technique excluding contributions from parasitic contact resistances. Over time, this method has become an interdisciplinary characterization tool in materials science, the semiconductor industry, geology, physics, etc., and is employed for both fundamental and application-driven research. However, the correct derivation of the conductivity is a demanding task which faces several difficulties, e.g. the homogeneity of the sample or the isotropy of the phases. In addition, these sample-specific characteristics are intimately related to technical constraints such as the probe geometry and size of the sample. In particular, the latter is of importance for nanostructures which can now be probed technically on very small length scales. On the occasion of the 100th anniversary of the four-point probe technique, introduced by Frank Wenner, in this review we revisit and discuss various correction factors which are mandatory for an accurate derivation of the resistivity from the measured resistance. Among others, sample thickness, dimensionality, anisotropy, and the relative size and geometry of the sample with respect to the contact assembly are considered. We are also able to derive the correction factors for 2D anisotropic systems on circular finite areas with variable probe spacings. All these aspects are illustrated by state-of-the-art experiments carried out using a four-tip STM/SEM system. We are aware that this review article can only cover some of the most important topics. Regarding further aspects, e.g. technical realizations, the influence of inhomogeneities or different transport regimes, etc., we refer to other review articles in this field.
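For reference, the two classic limiting cases of the in-line four-point probe formula, in which the measured resistance R = V/I is scaled by a geometry-dependent correction factor; the review derives many further corrections (anisotropy, finite areas, variable spacings):

```python
# Standard in-line four-point probe formulas for equal probe spacing.
import math

def resistivity_bulk(V, I, spacing):
    """Semi-infinite (bulk) sample: rho = 2*pi*s * V/I."""
    return 2.0 * math.pi * spacing * V / I

def resistivity_thin_film(V, I, thickness):
    """Infinite thin film, t << s: rho = (pi/ln 2) * t * V/I ~ 4.532*t*V/I."""
    return (math.pi / math.log(2.0)) * thickness * V / I

# Example: 1 mm probe spacing, 200 nm film, 1 mA current, 2.3 mV reading.
print("bulk rho [ohm m]:", resistivity_bulk(2.3e-3, 1e-3, 1e-3))
print("film rho [ohm m]:", resistivity_thin_film(2.3e-3, 1e-3, 200e-9))
```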
Abstract:
The work presented herein covers a broad range of research topics and so, in the interest of clarity, has been presented in a portfolio format. Accordingly, each chapter consists of its own introductory material prior to presentation of the key results garnered; this is then followed by a short discussion of their significance. In the first chapter, a methodology to facilitate the resolution and qualitative assessment of very large inorganic polyoxometalates was designed and implemented employing ion-mobility mass spectrometry. Furthermore, the potential of this technique for ‘mapping’ the conformational space occupied by this class of materials was demonstrated. These claims are then substantiated by the development of a tuneable, polyoxometalate-based calibration protocol that provided the necessary platform for quantitative assessments of similarly large, but unknown, polyoxometalate species. In addition, whilst addressing a major limitation of travelling wave ion mobility, this result also highlighted the potential of this technique for solution-phase cluster discovery. The second chapter reports on the application of a biophotovoltaic electrochemical cell for characterising the electrogenic activity inherent to a number of mutant Synechocystis strains. The intention was to determine the key components in the photosynthetic electron transport chain responsible for extracellular electron transfer. This would help to address the significant lack of mechanistic understanding in this field. Finally, in the third chapter, the design and fabrication of a low-cost, highly modular, continuous cell culture system is presented. To demonstrate the advantages and suitability of this platform for experimental evolution investigations, an exploration into the photophysiological response to gradual iron limitation, in both the ancestral wild type and a randomly generated mutant library population, was undertaken. Furthermore, coupling random mutagenesis to continuous culture in this way is shown to constitute a novel source of genetic variation that is open to further investigation.
Abstract:
Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially in proportion to the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition whereby precomputed subpaths are composed to compute the whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths. Evaluating the important ones helps to compute tight bounds efficiently and quickly.
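For reference, a minimal sketch of standard uniformization, the baseline that the path-based bounding approach extends; the generator matrix below is a toy assumption, not a model from the dissertation:

```python
# Transient analysis of a CTMC by uniformization:
#   pi(t) = sum_n Poisson(n; Lambda*t) * pi0 * P^n,  P = I + Q/Lambda,
# with Lambda at least the largest exit rate of the generator Q.
import numpy as np

def uniformization(Q, pi0, t, tol=1e-12):
    """Transient distribution of a CTMC with generator Q at time t."""
    Lam = max(-Q.diagonal())                  # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lam          # embedded DTMC matrix
    weight = np.exp(-Lam * t)                 # Poisson term, n = 0
    v = pi0.copy()
    result = weight * v
    n, acc = 0, weight
    while 1.0 - acc > tol:                    # truncate the Poisson sum
        n += 1
        v = v @ P
        weight *= Lam * t / n
        result += weight * v
        acc += weight
    return result

# Toy 3-state repair model: 0=up, 1=degraded, 2=down.
Q = np.array([[-0.2, 0.2, 0.0],
              [0.5, -0.8, 0.3],
              [0.0, 1.0, -1.0]])
pi0 = np.array([1.0, 0.0, 0.0])
print("state distribution at t=5:", uniformization(Q, pi0, 5.0))
```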
Abstract:
Kariba weed (Salvinia molesta) is an invasive alien waterweed that was first recorded in Uganda in sheltered bays of Lake Kyoga in June 2013. This waterweed has become a common feature on Lake Kyoga and its associated rivers, streams and swamps, and has spread to other lakes notably Kwania and Albert in addition to Lake Kimira in Bugiri district.
Abstract:
The immune system is able to produce antibodies, which have the capacity to recognize and bind to foreign molecules or pathogenic organisms. Currently, a diversity of diseases can be treated with antibodies, such as immunoglobulin G (IgG). The development of cost-efficient processes for their extraction and purification is therefore an area of major interest in biotechnology. Aqueous biphasic systems (ABS) have been investigated for this purpose, since they allow a reduction in the costs and the number of steps involved in the process when compared with conventional methods. Nevertheless, typical ABS have not been shown to be selective, resulting in low purification factors and yields. In this context, the addition of ionic liquids (ILs) as adjuvants can be a viable and potential alternative to tailor the selectivity of these systems. In this work, ABS composed of polyethylene glycol (PEG) of different molecular weights and a biodegradable salt (potassium citrate), using ILs as adjuvants (5 wt%), were studied for the extraction and purification of IgG from a rabbit source. Initially, the extraction time, the effect of the molecular weight of PEG in a buffer solution of K3C6H5O7/C6H8O7 at pH≈7, and the effect of pH (5–9) on the yield (YIgG%) and extraction efficiency (EEIgG%) of IgG were tested. The best results regarding EEIgG% were achieved with a centrifugation step at 1000 rpm for 10 min, to promote the separation of the phases, followed by 120 min of equilibration. This procedure was then applied to the remaining experiments. The study of PEGs with different molecular weights revealed a high affinity of IgG for the PEG-rich phase, particularly for PEGs of lower molecular weight (EEIgG% of 96% with PEG 400). On the other hand, the variation of the pH of the buffer solution did not show a significant effect on EEIgG%. Finally, the influence of the addition of different ILs (5 wt%) on IgG extraction in ABS composed of PEG 400 at pH≈7 was evaluated. In these studies, it was possible to obtain an EEIgG% of 100% with the ILs composed of the anions [TOS]-, [CH3CO2]- and Cl-, although the YIgG% obtained were lower than 40%. On the other hand, the ILs composed of the anion Br-, as well as of the cation [C10mim]+, although not leading to an EEIgG% of 100%, provided an increase in the YIgG%. ABS composed of PEG, a biodegradable organic salt and ILs as adjuvants proved to be an alternative and promising method to purify IgG. However, additional studies are still required in order to reduce the loss of IgG.
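For reference, the usual definitions behind these two figures of merit, assuming the standard conventions of ABS partitioning studies (the thesis may state them slightly differently); under these definitions EEIgG% can reach 100% while YIgG% stays low when protein is lost, e.g. at the interphase:

```latex
% Assumed standard definitions: extraction efficiency compares the two
% coexisting phases, while yield is taken relative to the total IgG loaded.
EE_{\mathrm{IgG}}\% =
  \frac{m_{\mathrm{IgG}}^{\mathrm{PEG\ phase}}}
       {m_{\mathrm{IgG}}^{\mathrm{PEG\ phase}} + m_{\mathrm{IgG}}^{\mathrm{salt\ phase}}}
  \times 100,
\qquad
Y_{\mathrm{IgG}}\% =
  \frac{m_{\mathrm{IgG}}^{\mathrm{PEG\ phase}}}
       {m_{\mathrm{IgG}}^{\mathrm{total}}} \times 100 .
```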
Abstract:
The Caspian Sea, with its unique characteristics, is a significant source of the heat and moisture required by weather systems passing over the north of Iran. Investigation of the heat and moisture fluxes in the region and their effects on these systems, which can lead to floods and major financial and human losses, is essential for weather forecasting. Nowadays, with the improvement of numerical weather and climate prediction models and the increasing need for more accurate forecasting of heavy rainfall, the evaluation and verification of these models has become much more important. In this study we have used the WRF model, a research and practical model with many valuable characteristics and flexibilities. We investigate the effects of the heat and moisture fluxes of the Caspian Sea on the synoptic and dynamical structure of 20 selected systems associated with heavy rainfall on the southern shores of the Caspian Sea. These systems were selected based on rainfall data gathered by three local stations (Rasht, Babolsar and Gorgan) in different seasons during a five-year period (2005-2010), taking the maximum amount of rainfall over the 24 hours of a day. In addition to synoptic analyses of these systems, the WRF model was run with and without surface fluxes, using two nested grids with horizontal resolutions of 12 and 36 km. The results show good consistency between the predicted rainfall field distribution and the times of the beginning and end of rainfall and the observations, but the model underestimates the amounts of rainfall, with a maximum difference from the observations of about 69%. Also, no significant changes in the results are seen when the domain and the resolution of the computations are changed. Notably, the systems are severely weakened by removing the heat and moisture fluxes: large-scale rainfall amounts decrease by up to 77% and convective rainfall tends to zero.
Abstract:
One of the most important challenges of this decade is the Internet of Things (IoT), which pursues the integration of real-world objects into the Internet. One of the key areas of the IoT is Ambient Assisted Living (AAL) systems, which should be able to react to variable and continuous changes while ensuring their acceptance and adoption by users. This means that AAL systems need to work as self-adaptive systems. The autonomy property inherent to software agents makes them a suitable choice for developing self-adaptive systems. However, agents lack the mechanisms to deal with the variability present in the IoT domain with regard to devices and network technologies. To overcome this limitation we have already proposed a Software Product Line (SPL) process for the development of self-adaptive agents in the IoT. Here we analyze the challenges posed by the development of agent-based self-adaptive AAL systems. To do so, we focus on the domain and application engineering of the self-adaptation concern of our SPL process. In addition, we provide a validation of our development process for AAL systems.
Abstract:
Dynamically reconfigurable hardware is a promising technology that combines in the same device both the high performance and the flexibility that many recent applications demand. However, one of its main drawbacks is the reconfiguration overhead, which involves important delays in the task execution, usually in the order of hundreds of milliseconds, as well as high energy consumption. One of the most powerful ways to tackle this problem is configuration reuse, since reusing a task does not involve any reconfiguration overhead. In this paper we propose a configuration replacement policy for reconfigurable systems that maximizes task reuse in highly dynamic environments. We have integrated this policy in an external task-graph execution manager that applies task prefetch by loading and executing the tasks as soon as possible (ASAP). However, we have also modified this ASAP technique in order to make the replacements more flexible, by taking into account the mobility of the tasks and delaying some of the reconfigurations. In addition, this replacement policy is a hybrid design-time/run-time approach, which performs the bulk of the computations at design time in order to save run-time computations. Our results illustrate that the proposed strategy outperforms other state-of-the-art replacement policies in terms of reuse rates and achieves near-optimal reconfiguration overhead reductions. In addition, by performing the bulk of the computations at design time, we reduce the execution time of the replacement technique by 10 times with respect to an equivalent purely run-time one.
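For illustration, a toy sketch of the core reuse-maximizing idea, assuming (unlike the paper's richer policy with task mobility and delayed reconfigurations) a fully known task sequence and single-task reconfigurable units; on a miss it evicts the loaded configuration whose next reuse is furthest away, Belady-style, exploiting design-time knowledge:

```python
# Count reconfigurations for a known task sequence on n_units
# reconfigurable units, evicting the configuration reused furthest
# in the future; every hit on a loaded configuration is a free reuse.
def schedule_replacements(task_sequence, n_units):
    loaded, reconfigs = set(), 0
    for i, task in enumerate(task_sequence):
        if task in loaded:
            continue                          # reuse: no reconfiguration
        reconfigs += 1
        if len(loaded) < n_units:
            loaded.add(task)
            continue
        def next_use(cfg):                    # distance to next reuse
            rest = task_sequence[i + 1:]
            return rest.index(cfg) if cfg in rest else float("inf")
        loaded.remove(max(loaded, key=next_use))
        loaded.add(task)
    return reconfigs

seq = ["A", "B", "C", "A", "B", "D", "A", "C"]
print("reconfigurations with 2 units:", schedule_replacements(seq, 2))
print("reconfigurations with 3 units:", schedule_replacements(seq, 3))
```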
Abstract:
New generation embedded systems demand high performance, efficiency and flexibility. Reconfigurable hardware can provide all these features. However, the costly reconfiguration process and the lack of management support have prevented a broader use of these resources. To solve these issues we have developed a scheduler that deals with task-graphs at run-time, steering their execution on the reconfigurable resources while carrying out both prefetch and replacement techniques that cooperate to hide most of the reconfiguration delays. In our scheduling environment task-graphs are analyzed at design-time to extract useful information. This information is used at run-time to obtain near-optimal schedules, escaping from local-optimum decisions, while only carrying out simple computations. Moreover, we have developed a hardware implementation of the scheduler that applies all the optimization techniques while introducing a delay of only a few clock cycles. In the experiments, our scheduler clearly outperforms conventional run-time schedulers based on As-Soon-As-Possible techniques. In addition, our replacement policy, specially designed for reconfigurable systems, achieves almost optimal results regarding both reuse and performance.