913 results for cyber-physical systems (CPS)


Relevance:

100.00%

Publisher:

Abstract:

Physical systems featuring the co-existence and interplay of processes on distinct spatio-temporal scales are found in research areas ranging from studies of brain activity to astrophysics. The complexity of such systems makes their theoretical and experimental analysis technically and conceptually challenging. Here, we discovered that while the radiation of partially mode-locked fibre lasers is stochastic and intermittent on short time scales, it exhibits non-trivial periodicity and long-scale correlations over its slow evolution from one round trip to another. A new technique for evolution mapping of the intensity autocorrelation function has enabled us to reveal a variety of localized spatio-temporal structures and to experimentally study their symbiotic co-existence with stochastic radiation. Real-time characterization of dynamical spatio-temporal regimes of laser operation is set to bring new insights into the rich underlying nonlinear physics of practical active- and passive-cavity photonic systems.
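
As a rough sketch of what such an evolution mapping involves (our own illustration, with synthetic data standing in for measured round-trip intensity traces; all names below are hypothetical), one computes a normalized autocorrelation for each round trip and stacks them into a two-dimensional map whose rows track the slow evolution:

    import numpy as np

    def intensity_acf(trace):
        """Normalized autocorrelation of one round-trip intensity trace (FFT-based)."""
        x = trace - trace.mean()
        power = np.abs(np.fft.rfft(x, n=2 * len(x))) ** 2   # zero-padded power spectrum
        acf = np.fft.irfft(power)[: len(x)]                  # Wiener-Khinchin theorem
        return acf / acf[0]

    def evolution_map(traces):
        """Rows: round trips; columns: lag. Localized structures show up as ridges."""
        return np.vstack([intensity_acf(t) for t in traces])

    rng = np.random.default_rng(0)
    traces = rng.exponential(1.0, size=(500, 1024))  # placeholder for real intensity data
    acf_map = evolution_map(traces)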

Relevance:

100.00%

Publisher:

Abstract:

Various physical systems have dynamics that can be modeled by percolation processes. Percolation is used to study issues ranging from fluid diffusion through disordered media to the fragmentation of a computer network caused by hacker attacks. A common feature of all of these systems is the presence of two non-coexistent regimes associated with certain properties of the system. For example, a disordered medium may or may not allow the flow of a fluid, depending on its porosity. The change from one regime to the other characterizes the percolation phase transition. The standard way of analyzing this transition uses the order parameter, a variable related to some characteristic of the system that is zero in one of the regimes and nonzero in the other. The proposal introduced in this thesis is that this phase transition can be investigated without explicit use of the order parameter, but rather through the Shannon entropy, a measure of the degree of uncertainty in the information content of a probability distribution. The proposal is evaluated in the context of cluster formation in random graphs, and we apply the method to both classical (Erdős-Rényi) percolation and explosive percolation. It is based on computing the entropy of the cluster-size probability distribution, and the results show that the critical point of the transition is related to the derivatives of the entropy. Furthermore, the difference between the smooth and abrupt character of the classical and explosive percolation transitions, respectively, is reinforced by the observation that the entropy reaches a maximum at the critical point of the classical transition, while no such correspondence occurs for explosive percolation.
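
As a rough illustration of the entropy computation described above (not the thesis's actual code; one plausible reading, assumed here, takes p_i as the fraction of nodes belonging to cluster i, and the sweep over mean degree is our own choice):

    import numpy as np
    import networkx as nx

    def cluster_size_entropy(graph):
        """Shannon entropy of the cluster-size probability distribution.

        Assumed reading: p_i is the fraction of nodes in connected component i.
        """
        sizes = np.array([len(c) for c in nx.connected_components(graph)], dtype=float)
        p = sizes / sizes.sum()
        return -np.sum(p * np.log(p))

    n = 2000
    mean_degrees = np.linspace(0.1, 2.0, 40)     # classical transition sits at c = 1
    entropies = [cluster_size_entropy(nx.gnp_random_graph(n, c / n))
                 for c in mean_degrees]
    # Numerically differentiating `entropies` w.r.t. c should show structure near c = 1.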

Relevance:

100.00%

Publisher:

Abstract:

Geochemical barrier zones play an important role in determining various physical systems and characteristics of the oceans, e.g. hydrodynamics, salinity, temperature and light. In the book, each of more than 30 barrier zones is illustrated and defined by physical, chemical and biological parameters. Among the topics discussed are the processes of inflow, transformation and precipitation of the sedimentary layer in the open oceans and in more restricted areas such as the Baltic, Black and Mediterranean Seas.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we address together the magnetic and electrical properties of a particular semiconductor, GaMnAs. The treatment is analytical in the first part of the work, followed by a computational method for simulating the physical system through the implementation of the expressions obtained in the first part. The magnetic contribution is studied using a Kondo-type interaction, within a Green's-function approach. The electrical part, which consists of the Coulomb interactions between carriers and Mn ions, is treated within a multiple-scattering approach. The implementation of the proposed method computes the Green's functions converged to the multiple-scattering solution and uses them as a starting point for calculating the effective magnetic interactions between Mn ions mediated by charge carriers. The concentrations of both Mn ions and carriers were varied; the combination of these two parameters can lead to insulating samples or to metallic samples with carriers at the Fermi level of low or high mobility. As a result, a correlation is obtained between carrier mobility and the strength of the magnetic interaction: the greater the mobility, the stronger the interaction.
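
For reference, a Kondo-type coupling of the kind invoked above is conventionally written as a local exchange between the Mn moments and the carrier spin density (standard textbook form; the sign convention and coupling strength used in this work are not specified here):

    H_{\mathrm{Kondo}} = J \sum_{i} \mathbf{S}_i \cdot \mathbf{s}(\mathbf{r}_i)

where \mathbf{S}_i is the localized Mn spin at position \mathbf{r}_i, \mathbf{s}(\mathbf{r}_i) is the carrier spin density at that position, and J is the exchange coupling that mediates the effective carrier-mediated Mn-Mn interaction.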

Relevance:

100.00%

Publisher:

Abstract:

Subspaces and manifolds are two powerful models for high dimensional signals. Subspaces model linear correlation and are a good fit to signals generated by physical systems, such as frontal images of human faces and multiple sources impinging at an antenna array. Manifolds model sources that are not linearly correlated, but where signals are determined by a small number of parameters. Examples are images of human faces under different poses or expressions, and handwritten digits with varying styles. However, there will always be some degree of model mismatch between the subspace or manifold model and the true statistics of the source. This dissertation exploits subspace and manifold models as prior information in various signal processing and machine learning tasks.

A near-low-rank Gaussian mixture model measures proximity to a union of linear or affine subspaces. This simple model can effectively capture the signal distribution when each class is near a subspace. This dissertation studies how the pairwise geometry between these subspaces affects classification performance. When model mismatch is vanishingly small, the probability of misclassification is determined by the product of the sines of the principal angles between subspaces. When the model mismatch is more significant, the probability of misclassification is determined by the sum of the squares of the sines of the principal angles. Reliability of classification is derived in terms of the distribution of signal energy across principal vectors. Larger principal angles lead to smaller classification error, motivating a linear transform that optimizes principal angles. This linear transformation, termed TRAIT, also preserves some specific features in each class, being complementary to a recently developed Low Rank Transform (LRT). Moreover, when the model mismatch is more significant, TRAIT shows superior performance compared to LRT.
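
As a small illustration of the geometry involved (our sketch, not the dissertation's code), principal angles between two subspaces can be computed from the singular values of the product of orthonormal bases, and SciPy exposes this directly:

    import numpy as np
    from scipy.linalg import subspace_angles

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 3))   # columns span two 3-dimensional subspaces of R^50
    B = rng.standard_normal((50, 3))

    theta = subspace_angles(A, B)               # principal angles (radians)
    prod_sines = np.prod(np.sin(theta))         # governs misclassification at small mismatch
    sum_sq_sines = np.sum(np.sin(theta) ** 2)   # governs it at larger mismatch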

The manifold model enforces a constraint on the freedom of data variation. Learning features that are robust to data variation is very important, especially when the size of the training set is small. A learning machine with a large number of parameters, e.g., a deep neural network, can describe a very complicated data distribution well. However, it is also more likely to be sensitive to small perturbations of the data, and to suffer from degraded performance when generalizing to unseen (test) data.

From the perspective of the complexity of function classes, such a learning machine has a huge capacity (complexity), which tends to overfit. The manifold model provides us with a way of regularizing the learning machine so as to reduce the generalization error and therefore mitigate overfitting. Two different overfitting-prevention approaches are proposed, one from the perspective of data variation, the other from capacity/complexity control. In the first approach, the learning machine is encouraged to make decisions that vary smoothly for data points in local neighborhoods on the manifold; see the sketch after this paragraph. In the second approach, a graph adjacency matrix is derived for the manifold, and the learned features are encouraged to be aligned with the principal components of this adjacency matrix. Experimental results on benchmark datasets show a clear advantage of the proposed approaches when the training set is small.
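
A minimal sketch of the first approach's flavor (our construction: a k-NN graph stands in for the manifold neighborhoods, and the penalty weight is an arbitrary placeholder):

    import torch

    def smoothness_penalty(outputs, edges, lam=0.1):
        """Penalize decisions that differ across neighboring points on the manifold.

        outputs: (n, c) model outputs; edges: (i, j) neighbor pairs from a k-NN
        graph built over the training data.
        """
        i, j = zip(*edges)
        diff = outputs[list(i)] - outputs[list(j)]
        return lam * (diff ** 2).sum(dim=1).mean()

    # usage: loss = task_loss + smoothness_penalty(model(x), knn_edges)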

Stochastic optimization makes it possible to track a slowly varying subspace underlying streaming data. By approximating local neighborhoods using affine subspaces, a slowly varying manifold can be efficiently tracked as well, even with corrupted and noisy data. The more local neighborhoods are used, the better the approximation, but the higher the computational complexity. A multiscale approximation scheme is proposed, in which the local approximating subspaces are organized in a tree structure; splitting and merging of the tree nodes then allows efficient control of the number of neighborhoods. The deviation of each datum from the learned model is estimated, yielding a series of statistics for anomaly detection. This framework extends the classical changepoint detection technique, which only works for one-dimensional signals. Simulations and experiments highlight the robustness and efficacy of the proposed approach in detecting an abrupt change in an otherwise slowly varying low-dimensional manifold.
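
A toy version of the deviation statistic (ours; a single GROUSE-flavored subspace update stands in for the dissertation's multiscale tree, and the threshold rule is arbitrary):

    import numpy as np

    def track(stream, U, step=0.01, thresh=3.0):
        """Track a slowly drifting subspace and score each datum's deviation.

        U: (d, k) matrix with orthonormal columns; stream: iterable of (d,) data.
        """
        scores = []
        for x in stream:
            w = U.T @ x                  # coefficients in the current subspace
            r = x - U @ w                # residual: deviation from the learned model
            s = float(np.linalg.norm(r))
            scores.append(s)
            U, _ = np.linalg.qr(U + step * np.outer(r, w))  # rank-one drift, re-orthonormalize
            if len(scores) > 30 and s > thresh * np.median(scores[-30:]):
                print("possible abrupt change at this datum")
        return scores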

Relevance:

100.00%

Publisher:

Abstract:

Markov chain analysis was recently proposed to assess the time scales and preferential pathways in biological or physical networks by computing residence times, first passage times, rates of transfer between nodes and the number of passages through a node. We propose to adapt an algorithm already published for simple systems to physical systems described with a high-resolution hydrodynamic model. The method is applied to bays and estuaries on the eastern coast of Canada that are of interest for shellfish aquaculture. Current velocities were computed on a two-dimensional grid of elements, and circulation patterns were summarized by averaging Eulerian flows between adjacent elements. Flows and volumes allow computing the probabilities of transition between elements, and hence the average time needed by virtual particles to move from one element to another, the rate of transfer between two elements, and the average residence time of each system. We also combined transfer rates and times to assess the main pathways of virtual particles released in farmed areas and the potential influence of farmed areas on other areas. We suggest that Markov chain analysis is complementary to other sets of ecological indicators proposed to analyse the interactions between farmed areas, e.g. the depletion index and carrying capacity assessment. Markov chain analysis has several advantages with respect to the estimation of connectivity between pairs of sites: it makes it possible to estimate transfer rates and times at once, in a very quick and efficient way, without the need to perform long-term simulations of particle or tracer concentrations.
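
A compact sketch of the underlying computation (our reconstruction, not the published algorithm; the discretization P_ij = F_ij * dt / V_i is an assumption on our part):

    import numpy as np

    def transition_matrix(flows, volumes, dt):
        """P[i, j]: probability that a particle in element i moves to j during dt.

        flows[i, j]: averaged Eulerian flow from element i to j (volume per time);
        volumes[i]: volume of element i. dt must be small enough that each row's
        off-diagonal sum stays below 1.
        """
        P = flows * dt / volumes[:, None]
        np.fill_diagonal(P, 0.0)
        np.fill_diagonal(P, 1.0 - P.sum(axis=1))   # probability of staying put
        return P

    def mean_first_passage_steps(P, target):
        """Average number of steps (in units of dt) to first reach `target` from
        every other element, via the absorbing-chain system t = 1 + Q t."""
        keep = [i for i in range(P.shape[0]) if i != target]
        Q = P[np.ix_(keep, keep)]
        return np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))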

Relevance:

100.00%

Publisher:

Abstract:

Single-cell functional proteomics assays can connect genomic information to biological function through quantitative and multiplex protein measurements. Tools for single-cell proteomics have developed rapidly over the past five years and are providing unique opportunities. This thesis describes an emerging microfluidics-based toolkit for single-cell functional proteomics, focusing on the development of single-cell barcode chips (SCBCs) with applications in fundamental and translational cancer research.

The discussion begins with a microchip designed to simultaneously quantify a panel of secreted, cytoplasmic and membrane proteins from single cells; it is the prototype for subsequent proteomic microchips of more sophisticated design used in preclinical cancer research or clinical applications. The SCBCs are a highly versatile and information-rich tool for single-cell functional proteomics. They are based upon isolating individual cells, or defined numbers of cells, within microchambers, each of which is equipped with a large antibody microarray (the barcode), with between a few hundred and ten thousand microchambers included within a single microchip. Functional proteomics assays at single-cell resolution yield unique pieces of information that significantly shape the way of thinking in cancer research. An in-depth discussion of the analysis and interpretation of this unique information, such as functional protein fluctuations and protein-protein correlative interactions, follows.

The SCBC is a powerful tool for resolving the functional heterogeneity of cancer cells. It has the capacity to extract a comprehensive picture of the signal transduction network from single tumor cells and thus provides insight into the effect of targeted therapies on protein signaling networks. We demonstrate this point by applying the SCBCs to investigate three isogenic cell lines of glioblastoma multiforme (GBM).

The cancer cell population is highly heterogeneous, with high-amplitude fluctuations at the single-cell level, which in turn grant robustness to the entire population. The concept of a stable population existing in the presence of random fluctuations is reminiscent of many physical systems that are successfully understood using statistical physics. Thus, tools derived from that field can plausibly be applied to using fluctuations to determine the nature of signaling networks. In the second part of the thesis, we focus on such a case, using thermodynamics-motivated principles to understand cancer cell hypoxia: single-cell proteomics assays coupled with a quantitative version of Le Chatelier's principle derived from statistical mechanics yield detailed and surprising predictions, which were found to be correct in both cell lines and a primary tumor model.

The third part of the thesis demonstrates the application of this technology in preclinical cancer research to study GBM cancer cell resistance to molecular targeted therapy. Physical approaches to anticipate therapy resistance and to identify effective therapy combinations are discussed in detail. Our approach is based upon elucidating the signaling coordination within the phosphoprotein signaling pathways that are hyperactivated in human GBMs, and interrogating how that coordination responds to the perturbation of a targeted inhibitor. Strongly coupled protein-protein interactions constitute most signaling cascades; a physical analogy of such a system is the strongly coupled atom-atom interactions in a crystal lattice. Just as atomic interactions can be decomposed into a series of independent normal vibrational modes, a simplified picture of signaling network coordination can be achieved by diagonalizing protein-protein correlation or covariance matrices, decomposing the pairwise correlative interactions into a set of distinct linear combinations of signaling proteins (i.e. independent signaling modes). By doing so, two independent signaling modes, one associated with mTOR signaling and a second with ERK/Src signaling, have been resolved, which in turn allows us to anticipate resistance and to design effective combination therapies, as well as to identify those therapies and therapy combinations that will be ineffective. We validated our predictions in mouse tumor models and all predictions were borne out.
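
The mode decomposition described above is, in essence, an eigendecomposition of the protein-protein correlation (or covariance) matrix; a minimal sketch (ours, with synthetic data standing in for SCBC measurements):

    import numpy as np

    # rows: single cells; columns: measured phosphoproteins (placeholder data)
    rng = np.random.default_rng(2)
    measurements = rng.standard_normal((500, 12))

    corr = np.corrcoef(measurements, rowvar=False)   # pairwise correlative interactions
    eigvals, eigvecs = np.linalg.eigh(corr)          # diagonalize the coupling matrix

    # Each eigenvector is an independent "signaling mode": a linear combination of
    # proteins, analogous to a normal vibrational mode of a coupled crystal lattice.
    dominant_modes = eigvecs[:, np.argsort(eigvals)[::-1][:2]]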

In the last part, some preliminary results on the clinical translation of single-cell proteomics chips are presented. The successful demonstration of our work on human-derived xenografts provides the rationale for extending our current work into the clinic. It will enable us to interrogate GBM tumor samples in a way that could potentially yield a straightforward, rapid interpretation, so that we can give therapeutic guidance to the attending physicians within a clinically relevant time scale. The technical challenges of the clinical translation are presented, and our solutions to address them are discussed as well. A clinical case study then follows, in which preliminary data collected from a pediatric GBM patient bearing an EGFR-amplified tumor are presented to demonstrate the general protocol and workflow of the proposed clinical studies.

Relevance:

100.00%

Publisher:

Abstract:

Uncertain systems have recently attracted much attention from the academic community, from the standpoint of both scientific research and practical applications. A series of mathematical approaches has emerged to handle the uncertainties of real physical systems. In this context, the work presented here focuses on the application of control theory to a nonlinear dynamical system with parametric variations, with robustness in view. As the practical application of this work we used a Quanser coupled-tank system, in a configuration whose mathematical model is a second-order single-input single-output (SISO) system. Control is performed by PID controllers, designed by various techniques, aiming to achieve robust performance and stability when subjected to parameter variations. Other controllers are designed with the intention of comparing the performance and robust stability of the resulting systems. Results are obtained and compared from simulations in MATLAB/Simulink.
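
For illustration only, a textbook discrete PID acting on a generic second-order plant (the gains, sampling time and plant coefficients below are arbitrary placeholders, not the Quanser tank model):

    import numpy as np

    def simulate_pid(kp, ki, kd, a1, a2, b, setpoint=1.0, dt=0.01, steps=2000):
        """Discrete PID driving a second-order SISO plant y'' + a1 y' + a2 y = b u."""
        y = dy = integral = prev_err = 0.0
        out = []
        for _ in range(steps):
            err = setpoint - y
            integral += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integral + kd * deriv    # PID control law
            prev_err = err
            ddy = b * u - a1 * dy - a2 * y               # plant dynamics (Euler step)
            dy += ddy * dt
            y += dy * dt
            out.append(y)
        return np.array(out)

    response = simulate_pid(kp=8.0, ki=4.0, kd=1.0, a1=3.0, a2=2.0, b=1.0)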

Relevance:

100.00%

Publisher:

Abstract:

This paper presents LABNET, an internet-based remote laboratory for control engineering education developed at UEM-University. At present, the remote laboratory integrates three basic physical systems (level control, temperature control and a ship stabilizing system). The LABNET architecture is presented and discussed in detail. Issues concerning concurrent user access, local or remote feedback, automatic report generation and the reuse of experiment templates are addressed. Furthermore, the experience gained in developing, testing and using the system is presented, together with its consequences for future designs.

Relevance:

100.00%

Publisher:

Abstract:

This paper deals with the self-scheduling problem of a price-taker with wind and thermal power production, assisted by a cyber-physical system for supporting management decisions in a day-ahead electric energy market. The self-scheduling is formulated as a stochastic mixed-integer linear programming problem. Uncertainties in electricity price and wind power are considered through a set of scenarios. Thermal units are modelled by start-up and variable costs, and constraints such as ramp-up/down and minimum up/down time limits are considered. The stochastic mixed-integer linear programming formulation provides decision support for strategies that profit from effective joint wind and thermal bidding. A case study is presented using data from the Iberian electricity market.
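
A toy scenario-based version of such a problem (our sketch in PuLP; the prices, wind availability, costs and capacity are invented, and the ramp and minimum up/down time constraints are omitted for brevity):

    import pulp

    T = range(4)                            # hours
    S = range(3)                            # equiprobable price/wind scenarios
    price = {(s, t): 40 + 5 * s + 2 * t for s in S for t in T}  # EUR/MWh (invented)
    wind = {(s, t): 20 + 3 * s for s in S for t in T}           # available wind MW (invented)
    pmax, cvar, cfix = 100, 25, 300         # thermal capacity, variable cost, no-load cost

    m = pulp.LpProblem("self_scheduling", pulp.LpMaximize)
    u = pulp.LpVariable.dicts("on", T, cat="Binary")            # thermal commitment
    p = pulp.LpVariable.dicts("p", [(s, t) for s in S for t in T], lowBound=0)

    # expected revenue from selling thermal + wind, minus thermal costs
    m += pulp.lpSum((price[s, t] * (p[s, t] + wind[s, t]) - cvar * p[s, t]) / len(S)
                    for s in S for t in T) - pulp.lpSum(cfix * u[t] for t in T)
    for s in S:
        for t in T:
            m += p[s, t] <= pmax * u[t]     # thermal output only when committed
    m.solve()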

Relevance:

100.00%

Publisher:

Abstract:

Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations indicate that the approach substantially advances the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework was then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
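
To fix ideas on primal decomposition for constraint-coupled problems (a generic textbook sketch, ours rather than the thesis's algorithms): a master process allocates shares of a coupled resource, each agent evaluates its local sensitivity, and the allocation is updated while remaining feasible for the coupling constraint.

    import numpy as np

    # Each agent i has a private quadratic cost f_i(x) = a_i * (x - c_i)**2 on its
    # local decision x_i; the coupling constraint is sum_i x_i = b (shared resource).
    rng = np.random.default_rng(3)
    n, b = 5, 10.0
    a, c = rng.uniform(1, 3, n), rng.uniform(0, 4, n)

    y = np.full(n, b / n)                 # master's initial resource allocation
    for _ in range(200):
        grad = 2 * a * (y - c)            # each agent reports its local sensitivity
        grad -= grad.mean()               # project the step onto the hyperplane sum(y) = b
        y -= 0.05 * grad                  # master updates the allocations

    # y now approximates the optimal split; sum(y) equals b at every iteration.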

Relevance:

100.00%

Publisher:

Abstract:

This master thesis work focuses on the development of a predictive EHC control function for a diesel plug-in hybrid electric vehicle equipped with a EURO 7-compliant exhaust aftertreatment system (EATS), with the purpose of showing the advantages of a predictive control strategy with respect to a rule-based one. A preliminary step is the definition of an accurate physical model of the powertrain and EATS, starting from already existing and validated applications. Then, a rule-based control strategy managing the torque split between the electric motor (EM) and the internal combustion engine (ICE) is developed and calibrated, with the main target of limiting tailpipe NOx emissions by taking into account EM and ICE operating conditions together with EATS conversion efficiency. The information available from vehicle connectivity is used to reconstruct the future driving scenario, also referred to as the electronic horizon (eHorizon), and in particular to predict the first ICE start. Based on this knowledge, an EATS pre-heating phase can be planned to avoid low pollutant-conversion efficiencies, thus preventing the high NOx emissions due to engine cold start. Consequently, the final NOx emissions over the complete driving cycle are strongly reduced, making it possible to comply with the limits potentially set by the incoming EURO 7 regulation. Moreover, for the same NOx emission target, the gain achieved through the predictive EHC control function allows a simplified EATS layout to be considered, thus reducing the related manufacturing cost. The promising results achieved in terms of NOx emission reduction show the effectiveness of a predictive control strategy focused on EATS thermal management, and highlight the potential of a complete integration and parallel development of the vehicle physical systems, control software and connectivity data management involved.

Relevance:

100.00%

Publisher:

Abstract:

The central aim of this dissertation is to introduce innovative methods, models, and tools to enhance the overall performance of supply chains responsible for handling perishable products. This notion of improved performance encompasses several critical dimensions, including efficiency of supply chain operations, product quality, safety, sustainability, minimization of waste generation, and compliance with norms and regulations. The research is structured around three specific research questions that provide a solid foundation for delving into and narrowing down the array of potential solutions. These questions primarily concern enhancing the overall performance of distribution networks for perishable products and optimizing the package hierarchy, extending to unconventional packaging solutions. A well-defined research framework guides the approach to these questions, and the dissertation adheres to an overarching methodological approach comprising three fundamental aspects. The first aspect centers on the necessity of systematic data sampling and categorization, including the identification of critical points within food supply chains. The data collected in this context must then be organized within a customized data structure designed to feed both cyber-physical systems and digital twins, in order to quantify and analyze supply chain failures from a preventive perspective.

Relevance:

100.00%

Publisher:

Abstract:

Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner functioning is still lacking. In this work, we try to understand the behavior of neural networks by modelling them in the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as in a real laboratory, measuring the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of the training occupies a central role in the dynamics of the weights and makes it difficult to assimilate neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate from a CNN the "hottest" filters, the performance of the model remains the same, whereas if we eliminate the "coldest" ones, the performance gets drastically worse. This result could be exploited in a training loop that eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be lightened and, more importantly, this would be done by following a physical model. In any case, besides its practical applications, our analysis shows that a new and improved modeling of Deep Learning systems can pave the way to new and more efficient algorithms.
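
A loose illustration of such a pruning loop (ours; the thesis's actual temperature definition is not reproduced here, so the per-filter statistic below, the variance of recent gradient updates, is a hypothetical stand-in):

    import torch
    import torch.nn as nn

    def filter_temperatures(conv, grad_history):
        """Placeholder per-filter 'temperature': variance of recent weight updates.

        grad_history: list of gradient tensors for conv.weight from past steps.
        """
        stacked = torch.stack([g.flatten(1) for g in grad_history])  # (steps, out_ch, rest)
        return stacked.var(dim=0).mean(dim=1)                        # one value per filter

    def prune_hottest(conv, temps, frac=0.1):
        """Zero out the fraction of filters with the highest temperature."""
        k = max(1, int(frac * conv.out_channels))
        hottest = torch.topk(temps, k).indices
        with torch.no_grad():
            conv.weight[hottest] = 0.0
            if conv.bias is not None:
                conv.bias[hottest] = 0.0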