904 results for Distributed measurement and control
Abstract:
We examined sequence variation in the mitochondrial cytochrome b gene (1140 bp, n = 73) and control region (842-851 bp, n = 74) in the Eurasian harvest mouse (Micromys minutus (Pallas, 1771)), with samples drawn from across its range, from Western Europe to Japan. Phylogeographic analyses revealed region-specific haplotype groupings combined with overall low levels of inter-regional genetic divergence. Despite the enormous intervening distance, European and East Asian samples showed a net nucleotide divergence of only 0.36%. Based on an evolutionary rate for the cytochrome b gene of 2.4% per site per lineage per million years, the initial divergence time of these populations is estimated at around 80 000 years before present. Our findings are consistent with available fossil evidence that has recorded repeated cycles of extinction and recolonization of Europe by M. minutus through the Quaternary. The molecular data further suggest that recolonization occurred from refugia in the Central to East Asian region. Japanese haplotypes of M. minutus, with the exception of those from Tsushima Is., show limited nucleotide diversity (0.15%) compared with those found on the adjacent Korean Peninsula. This finding suggests recent colonization of the Japanese Archipelago, probably around the last glacial period, followed by rapid population growth.
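The divergence-time estimate above follows from the standard pairwise formula t = d / (2r), where d is the net nucleotide divergence and r the per-lineage substitution rate (the factor of 2 arises because both lineages accumulate substitutions independently). A quick arithmetic check:

```python
# Divergence time from net nucleotide divergence and a per-lineage rate.
# t = d / (2 r): the two lineages diverge at twice the per-lineage rate.
def divergence_time_years(divergence_pct, rate_pct_per_lineage_per_my):
    combined_rate = 2 * rate_pct_per_lineage_per_my  # % per million years
    return divergence_pct / combined_rate * 1_000_000

t = divergence_time_years(0.36, 2.4)
print(round(t))  # 75000, in line with the ~80 000 years cited above
```

The rounder figure of ~80 000 years presumably reflects the estimation uncertainty of the molecular-clock calibration.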
Abstract:
Ambulatory blood pressure monitoring (ABPM) has become indispensable for the diagnosis and control of hypertension. However, no consensus exists on how daytime and nighttime periods should be defined. OBJECTIVE: To compare daytime and nighttime blood pressure (BP) defined by an actigraph and by body position with BP resulting from arbitrary daytime and nighttime periods. PATIENTS AND METHOD: ABPM, sleeping periods and body position were recorded simultaneously using an actigraph (SenseWear Armband®) in patients referred for ABPM. BP results obtained with the actigraph (sleep and position) were compared to the results obtained with fixed daytime (7 a.m.-10 p.m.) and nighttime (10 p.m.-7 a.m.) periods. RESULTS: Data from 103 participants were available. More than half of them were taking antihypertensive drugs. Nocturnal BP was lower (systolic BP: 2.08 ± 4.50 mmHg; diastolic BP: 1.84 ± 2.99 mmHg, P<0.05) and dipping was more marked (systolic BP: 1.54 ± 3.76%; diastolic BP: 2.27 ± 3.48%, P<0.05) when nighttime was defined with the actigraph. Standing BP was higher (systolic BP: 1.07 ± 2.81 mmHg; diastolic BP: 1.34 ± 2.50 mmHg) than daytime BP defined by a fixed period. CONCLUSION: Diurnal BP, nocturnal BP and dipping are influenced by the definition of daytime and nighttime periods. Studies evaluating the prognostic value of each method are needed to clarify which definition should be used.
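The dipping values compared above follow the usual definition of nocturnal dipping, i.e. the night-to-day BP fall expressed as a percentage of daytime BP. A minimal sketch of that computation (illustrative values, not taken from the study):

```python
# Nocturnal "dipping": the fall from daytime to nighttime BP as a
# percentage of daytime BP. A fall of >= 10% is conventionally a "dipper".
def dipping_pct(day_bp, night_bp):
    return (day_bp - night_bp) / day_bp * 100.0

# Hypothetical systolic values, only to illustrate the formula:
print(round(dipping_pct(135.0, 120.0), 1))  # 11.1 -> a dipper
```

Shifting the nighttime window (fixed clock times vs. actigraph-detected sleep) changes both the day and night averages, which is why the dipping percentage differs between the two definitions.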
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse, transforming technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for motivating and benchmarking. Earlier research in the field of R&D performance analysis has generally focused either on the activities and the relevant factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement found in the literature are to some extent adaptable to the development of measurement systems and the selection of measures in R&D activities.
However, it is necessary to emphasize the special aspects of measuring R&D performance that make the development of new approaches, especially for R&D performance measure selection, necessary. First, the special characteristics of R&D - such as the long time lag between inputs and outcomes, as well as the overall complexity and difficult coordination of activities - influence the problems of R&D performance analysis, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges highlight the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of organizations' research and development activities by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support management in their R&D decision-making with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on promoting the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features, providing guidelines through novel types of approaches.
Gathering data and conducting case studies in metal and electronics industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped us to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations; this is essential, as recognizing the most important problem areas is a crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic compared to the empirical analysis, as it was common that no systematic approaches had been utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be more directly utilized in decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance in the set of R&D measures was desired and was gained through the holistic approaches to the selection processes; and 4) more objectivity was gained by organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to contribute to the present body of knowledge of R&D performance analysis by facilitating the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects into the developed novel approaches, methods and tools in the selection processes of R&D measures, applied in real-world organizations.
Throughout the research, the handling of the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing R&D performance measure selection, is strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from both the scientific and the practical point of view.
Abstract:
Home blood pressure measurement − epidemiology and clinical use. Elevated blood pressure, the most significant risk factor for premature death worldwide, cannot be identified or treated without accurate and practical blood pressure measurement methods. Home blood pressure measurement has gained great popularity among patients. Physicians, however, have not yet fully accepted home measurement, because sufficient evidence of its performance and benefits has been lacking. The aim of this study was to show that blood pressure measured at home (home BP) is more accurate than conventional blood pressure measured at the office (office BP), and that it is also effective in clinical use. We studied the use of home BP in the diagnosis and treatment of hypertension. In addition, we examined the association of home BP with hypertensive target-organ damage. The first dataset, a representative sample of the Finnish adult population, comprised 2 120 subjects aged 45-74 years. Participants measured their home BP for one week and attended a health examination that included, in addition to a clinical examination and an interview, an electrocardiogram and office BP measurement. In 758 participants, carotid intima-media thickness (a marker of atherosclerosis) was also measured, and in 237, arterial pulse wave velocity (a marker of arterial stiffness). In the second dataset, comprising 98 hypertensive patients, treatment was guided, depending on randomization, by either ambulatory (24-hour) blood pressure or home BP. Office BP was significantly higher than home BP (the mean systolic/diastolic difference was 8/3 mmHg), and agreement between the two methods in the diagnosis of hypertension was at best moderate (75%). Of the 593 subjects with elevated office BP, 38% had normal BP at home, i.e. so-called white-coat hypertension. Hypertension can thus be overdiagnosed in every third patient in a screening setting. White-coat hypertension was associated with mildly elevated blood pressure, low body mass index and non-smoking, but not with psychiatric morbidity. White-coat hypertension nevertheless does not appear to be an entirely harmless phenomenon and may predict future hypertension, as the cardiovascular risk factor profile of those affected fell between the profiles of normotensive and truly hypertensive subjects. Home BP had a stronger association than office BP with hypertensive target-organ damage (intima-media thickness, pulse wave velocity and electrocardiographic left ventricular hypertrophy). Home BP was an effective guide for the treatment of hypertension, as drug treatment guided by home BP led to blood pressure control as good as that guided by ambulatory BP, which has been regarded as the 'gold standard' of blood pressure measurement. Based on the results of this and earlier studies, home blood pressure measurement is a clear improvement over conventional office measurement. It is a practical, accurate and widely available method that may even become the first-line option in the diagnosis and treatment of hypertension. A change in blood pressure measurement practice is needed: on the basis of evidence-based medicine, it appears that office blood pressure measurement should be used only for screening purposes.
Abstract:
Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, is primarily dependent on the network propagation delay and the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments and to compare and contrast with those described by the HLA definition documents.
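Causal ordering of the kind discussed above is classically implemented with vector clocks, whose size grows with the number of participants, which is exactly the per-participant cost the abstract identifies. A minimal sketch of the standard mechanism (a generic illustration, not the paper's algorithm):

```python
# Vector clocks: one counter per participant. A message is delivered
# only after every event that causally precedes it; the clock is O(N)
# in the number of participants, which is the scalability cost noted.
class VectorClock:
    def __init__(self, n, pid):
        self.n, self.pid = n, pid
        self.clock = [0] * n

    def local_event(self):
        # Stamp a new local event and return its clock for sending.
        self.clock[self.pid] += 1
        return list(self.clock)

    def can_deliver(self, msg_clock, sender):
        # Deliverable iff it is the sender's next message and we have
        # already seen everything the sender had seen when it sent it.
        return (msg_clock[sender] == self.clock[sender] + 1 and
                all(msg_clock[k] <= self.clock[k]
                    for k in range(self.n) if k != sender))

    def deliver(self, msg_clock):
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]

a = VectorClock(3, 0)
msg = a.local_event()               # process 0's first event: [1, 0, 0]
b = VectorClock(3, 1)
print(b.can_deliver(msg, 0))        # True: nothing causally precedes it
print(b.can_deliver([2, 0, 0], 0))  # False: [1, 0, 0] must come first
```

Because each message carries N counters and each receiver compares all of them, latency and bandwidth both grow with the participant count, motivating the scalable alternatives the paper proposes.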
Abstract:
As part of a large European coastal operational oceanography project (ECOOP), we have developed a web portal for the display and comparison of model and in situ marine data. The distributed model and in situ datasets are accessed via an Open Geospatial Consortium Web Map Service (WMS) and Web Feature Service (WFS), respectively. These services were developed independently and readily integrated for the purposes of the ECOOP project, illustrating the ease of interoperability that results from adherence to international standards. The key feature of the portal is the ability to display co-plotted time series of the in situ and model data and to quantify the misfits between the two. By using standards-based web technology we allow the user to quickly and easily explore over twenty model data feeds and compare these with dozens of in situ data feeds without being concerned with the low-level details of differing file formats or the physical location of the data. Scientific and operational benefits of this work include model validation, quality control of observations, data assimilation and decision support in near real time. In these areas it is essential to be able to bring different data streams together from often disparate locations.
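A WMS client like the one described talks to the server through standardized request parameters. A sketch of building an OGC WMS 1.3.0 GetMap request (the endpoint and layer name below are hypothetical placeholders, not ECOOP's actual services):

```python
# Building an OGC WMS 1.3.0 GetMap request URL. Endpoint and layer are
# invented placeholders; the parameter names come from the WMS standard.
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width, height):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        # Note: WMS 1.3.0 uses lat,lon axis order for EPSG:4326.
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms", "SST_MODEL",
                     (48.0, -10.0, 60.0, 5.0), 800, 600)
print(url)
```

Because every conforming server accepts the same parameter vocabulary, a portal can co-plot feeds from many providers without format-specific client code, which is the interoperability point the abstract makes.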
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Linear Matrix Inequalities (LMIs) are a powerful tool that has been used in many areas ranging from control engineering to system identification and structural design. There are many factors that make LMIs appealing. One is the fact that many design specifications and constraints can be formulated as LMIs [1]. Once formulated in terms of LMIs, a problem can be solved efficiently by convex optimization algorithms. The basic idea of the LMI method is to formulate a given problem as an optimization problem with a linear objective function and linear matrix inequality constraints. An intelligent structure involves distributed sensors and actuators and a control law that applies localized actions in order to minimize or reduce the response at selected conditions. The objective of this work is to implement LMI-based control techniques applied to smart structures.
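As a concrete illustration (a textbook example, not taken from this paper), the classic stability LMI asks for a symmetric P > 0 with A^T P + P A < 0. For a 2x2 system, definiteness can be checked directly from the closed-form eigenvalues of a symmetric 2x2 matrix:

```python
# Checking a candidate solution of the Lyapunov LMI  A^T P + P A < 0.
# For a symmetric 2x2 matrix [[a, b], [b, c]] the eigenvalues are
# (a+c)/2 +- sqrt(((a-c)/2)^2 + b^2), so definiteness is easy to test.
import math

def sym2x2_eigs(a, b, c):
    mean, r = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    return mean - r, mean + r

def lyapunov_lmi_holds(A, P):
    # Q = A^T P + P A, with A, P given as 2x2 nested lists, P symmetric.
    q = [[sum(A[k][i] * P[k][j] + P[i][k] * A[k][j] for k in range(2))
          for j in range(2)] for i in range(2)]
    p_min = sym2x2_eigs(P[0][0], P[0][1], P[1][1])[0]
    q_max = sym2x2_eigs(q[0][0], q[0][1], q[1][1])[1]
    return p_min > 0 and q_max < 0  # P > 0  and  A^T P + P A < 0

A = [[0.0, 1.0], [-2.0, -3.0]]      # stable: eigenvalues -1 and -2
print(lyapunov_lmi_holds(A, [[1.5, 0.5], [0.5, 0.5]]))  # True
```

In practice an SDP solver searches for P subject to these constraints; the sketch above only verifies a given candidate, which is the feasibility half of the LMI formulation.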
Abstract:
This paper proposes a new methodology to control the power flow between a distributed generator (DG) and the electrical power distribution grid. Droop voltage control is used to manage the active and reactive power. Through this control a sinusoidal voltage reference is generated to be tracked by the voltage loop, which in turn generates the current reference for the current loop. The proposed control introduces feed-forward states that improve the control performance in order to obtain high quality for the current injected into the grid. The controllers were obtained through linear matrix inequalities (LMIs), using D-stability analysis to allocate the closed-loop controller poles. The results show a quick transient response with low oscillations. This paper presents the proposed control technique and the main simulation results, and a 1000 VA laboratory prototype was developed to demonstrate the feasibility of the proposed control. © 2012 IEEE.
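Droop control of the kind used above is conventionally written as f = f0 - kp(P - P0) and V = V0 - kq(Q - Q0): frequency sags with active power and voltage amplitude with reactive power. A minimal sketch with illustrative coefficients (assumed values, not the paper's design):

```python
# Conventional P-f / Q-V droop: the voltage reference's frequency and
# amplitude fall linearly as active (P) and reactive (Q) power rise.
# All coefficients below are illustrative, not taken from the paper.
def droop_reference(p_w, q_var, f0=60.0, v0=311.0, kp=1e-4, kq=1e-3):
    f = f0 - kp * p_w      # frequency droop against active power (Hz)
    v = v0 - kq * q_var    # amplitude droop against reactive power (V peak)
    return f, v

f, v = droop_reference(p_w=800.0, q_var=200.0)
print(round(f, 3), round(v, 2))  # 59.92 310.8
```

The resulting (f, v) pair defines the sinusoidal voltage reference that the inner voltage and current loops then track.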
Abstract:
This work focuses on the design and analysis of simple, low-cost hardware for efficient temperature measurement in agriculture. The main objective is to prove quantitatively, through statistical data analysis, to what extent simple hardware designed with inexpensive components can be used safely for indoor temperature measurement in farm buildings such as greenhouses, warehouses or silos. To verify the efficiency of the simple hardware, its data were compared with measurements from a high-performance LabVIEW platform. This work proved that simple hardware based on a microcontroller and the LM35 sensor can perform well. It presented good accuracy but relatively low precision, which can be improved by taking several consecutive signal samples and using their average value. Although many papers describe these components, this work is distinctive in presenting a numerical data analysis and in using high-performance systems to ensure critical data comparison.
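The LM35 outputs 10 mV per degree Celsius, so conversion from an ADC reading is a simple scaling; the averaging of consecutive samples mentioned above is equally simple. A sketch assuming a 10-bit ADC with a 5 V reference (the paper does not state the exact configuration):

```python
# LM35: 10 mV per degree C. A 10-bit ADC with a 5 V reference is
# assumed here for illustration; the paper does not specify these.
def adc_to_celsius(adc_count, vref=5.0, bits=10):
    volts = adc_count * vref / (2 ** bits - 1)
    return volts / 0.010  # 10 mV per degree C

def averaged_celsius(samples, **kw):
    # Averaging consecutive samples trades sampling time for precision,
    # which is the improvement the study reports.
    return sum(adc_to_celsius(s, **kw) for s in samples) / len(samples)

print(round(adc_to_celsius(51), 1))              # ~24.9 C for one reading
print(round(averaged_celsius([50, 51, 52]), 1))  # noise-averaged estimate
```

Averaging reduces random noise by roughly the square root of the sample count, which is why a handful of consecutive readings noticeably improves the precision of an otherwise cheap sensor chain.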
Abstract:
Distributed Software Development (DSD) is a development strategy that meets globalization's demands for increased productivity and cost reduction. However, temporal distance, geographical dispersion and socio-cultural differences have introduced new challenges and, especially, added new requirements related to the communication, coordination and control of projects. Among these new demands is the need for a software process that provides adequate support to distributed software development. This paper presents an integrated approach to software development and testing that considers the peculiarities of distributed teams. The purpose of the approach is to support DSD by providing better project visibility, improving communication between the development and test teams, and minimizing the ambiguity and difficulty of understanding the artifacts and activities. This integrated approach was conceived based on four pillars: (i) identifying the DSD peculiarities concerning development and test processes; (ii) defining the elements necessary to compose an integrated approach to development and testing that supports distributed teams; (iii) describing and specifying the workflows, artifacts and roles of the approach; and (iv) representing the approach appropriately to enable its effective communication and understanding.
Abstract:
Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance may suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on the fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
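Correlation discovery between performance indicators, as described above, can be illustrated with a plain Pearson coefficient over two monitored series (a generic sketch, not the paper's model; the sample values are invented):

```python
# Pearson correlation between two performance-indicator series, e.g.
# request latency vs. CPU utilisation samples. Values are illustrative.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

latency = [120, 135, 150, 180, 210]       # ms, hypothetical samples
cpu     = [0.42, 0.48, 0.55, 0.68, 0.81]  # utilisation, hypothetical
r = pearson(latency, cpu)
print(round(r, 3))  # close to 1: a candidate pair for a refined SLO
```

A strongly correlated indicator pair lets a scaling controller use the cheaper-to-measure indicator (here CPU utilisation) as a proxy for the SLA-relevant one (latency), which is the basis for the refined service level objectives mentioned above.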
Abstract:
The problem of fairly distributing the capacity of a network among a set of sessions has been widely studied. In this problem, each session connects a source and a destination via a single path, and its goal is to maximize its assigned transmission rate (i.e., its throughput). Since the links of the network have limited bandwidths, some criterion has to be defined to fairly distribute their capacity among the sessions. A popular criterion is max-min fairness, which, in short, guarantees that each session i gets a rate λi such that no session s can increase λs without causing another session s' to end up with a rate λs' < λs. Many max-min fair algorithms have been proposed, both centralized and distributed. However, to our knowledge, all proposed distributed algorithms require control data to be continuously transmitted in order to recompute the max-min fair rates when needed (because none of them has mechanisms to detect convergence to the max-min fair rates). In this paper we propose B-Neck, a distributed max-min fair algorithm that is also quiescent. This means that, in the absence of changes (i.e., session arrivals or departures), once the max-min rates have been computed, B-Neck stops generating network traffic. Quiescence is a key design concept of B-Neck, because B-Neck routers are capable of detecting and notifying changes in the convergence conditions of max-min fair rates. As far as we know, B-Neck is the first distributed max-min fair algorithm that does not require a continuous injection of control traffic to compute the rates. The correctness of B-Neck is formally proved, and extensive simulations are conducted, showing that B-Neck converges relatively fast and behaves well in the presence of sessions arriving and departing.
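The rates B-Neck converges to are the same ones computed by the classic centralised progressive-filling (water-filling) algorithm: grow all sessions equally, freeze those crossing a saturated link, repeat. A sketch of that textbook baseline (not B-Neck itself, which computes these rates distributedly and quiescently):

```python
# Progressive filling: the classic centralised algorithm for max-min
# fair rates. Shown only as the baseline that defines the rates a
# distributed algorithm such as B-Neck must converge to.
def max_min_fair(capacity, paths):
    # capacity: {link: bandwidth}; paths: {session: list of links}
    rate = {s: 0.0 for s in paths}
    cap = dict(capacity)
    active = set(paths)
    while active:
        # How many still-growing sessions cross each link.
        load = {l: sum(1 for s in active if l in paths[s]) for l in cap}
        # Grow all active sessions equally until some link saturates.
        incr = min(cap[l] / load[l] for l in cap if load[l])
        saturated = {l for l in cap
                     if load[l] and cap[l] / load[l] <= incr + 1e-12}
        for s in active:
            rate[s] += incr
        for l in cap:
            cap[l] -= incr * load[l]
        # Sessions crossing a saturated link are bottlenecked: freeze.
        active = {s for s in active
                  if not any(l in paths[s] for l in saturated)}
    return rate

caps = {"A": 1.0, "B": 0.5}
paths = {"s1": ["A"], "s2": ["B"], "s3": ["A", "B"]}
print(max_min_fair(caps, paths))  # {'s1': 0.75, 's2': 0.25, 's3': 0.25}
```

In the example, link B bottlenecks s2 and s3 at 0.25, and s1 then absorbs the remaining capacity of link A; no session can raise its rate without pushing a slower one below its own.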
Abstract:
Intelligent transport systems (ITS) have large potential in road-safety as well as non-safety applications. One of the big challenges for ITS is reliable and cost-effective vehicle communication, given the large number of vehicles, high mobility, and bursty traffic from safety and non-safety applications. In this paper, we investigate the use of dedicated short-range communications (DSRC) for coexisting safety and non-safety applications over infrastructured vehicle networks. The main objectives of this work are to improve the scalability of communications for vehicle networks, ensure QoS for safety applications, and leave as much bandwidth as possible for non-safety applications. A two-level adaptive control scheme is proposed to find an appropriate message rate and control channel interval for safety applications. Simulation results demonstrate that this adaptive method outperforms the fixed control method under varying numbers of vehicles. © 2012 Wenyang Guan et al.
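The general idea of adapting a message rate to observed channel conditions can be illustrated with a simple proportional controller driving the rate toward a target channel load (a deliberately single-level, generic sketch; the paper's actual scheme is two-level and also adapts the control-channel interval, and all constants here are invented):

```python
# A generic rate-adaptation sketch: steer the safety-beacon rate toward
# a target channel busy ratio. Gains and bounds are illustrative only.
def adapt_rate(rate_hz, busy_ratio, target=0.6, gain=5.0,
               lo=1.0, hi=10.0):
    new_rate = rate_hz + gain * (target - busy_ratio)
    return max(lo, min(hi, new_rate))  # clamp to allowed beacon rates

rate = 10.0
for busy in (0.9, 0.8, 0.7, 0.6):  # congestion easing off over time
    rate = adapt_rate(rate, busy)
print(round(rate, 2))  # 7.0: the rate backed off while the channel was busy
```

Backing the message rate off when the channel is congested preserves QoS for safety beacons while freeing bandwidth for non-safety traffic, which is the trade-off the scheme above balances.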
Abstract:
Protein carbonyls are widely analysed as a measure of protein oxidation. Several different methods exist for their determination. A previous study described orders-of-magnitude variance when protein carbonyls were analysed in a single laboratory by ELISA using different commercial kits. We have further explored the potential causes of variance in carbonyl analysis in a ring study. A soluble protein fraction was prepared from rat liver and exposed to 0, 5 and 15 min of UV irradiation. Lyophilised preparations were distributed to six laboratories across Europe that routinely undertake protein carbonyl analysis. ELISA and Western blotting techniques detected an increase in protein carbonyl formation between 0 and 5 min of UV irradiation irrespective of the method used. After irradiation for 15 min, half of the laboratories detected less oxidation than after 5 min of irradiation. Three of the four ELISA carbonyl results fell within 95% confidence intervals. Likely errors in calculating absolute carbonyl values may be attributed to differences in standardisation. Of the up to 88 proteins identified as containing carbonyl groups after tryptic cleavage of irradiated and control liver proteins, only seven were common to all three liver preparations. Lysine and arginine residues modified by carbonyls are likely to be resistant to tryptic proteolysis; use of a cocktail of proteases may increase the recovery of oxidised peptides. In conclusion, standardisation is critical for carbonyl analysis, and heavily oxidised proteins may not be effectively analysed by any existing technique.