Abstract:
In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
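To make the progressive-sampling idea concrete, here is a minimal sketch of deterministic progressive aggregation over nested samples. It is an illustration only, not the actual NOW! implementation or API; the data, sample fractions and seed are assumptions.

    # Minimal sketch of sampling-based progressive aggregation, in the spirit
    # of progressive analytics frameworks (illustrative only).
    import random

    def progressive_mean(records, fractions=(0.01, 0.1, 0.5, 1.0), seed=42):
        """Yield (sample_fraction, estimate) pairs over nested progressive samples.

        A fixed seed and nested samples keep the semantics deterministic and
        repeatable, so each early result has a well-defined provenance.
        """
        rng = random.Random(seed)
        shuffled = records[:]
        rng.shuffle(shuffled)            # one fixed permutation -> nested samples
        for frac in fractions:
            n = max(1, int(len(shuffled) * frac))
            sample = shuffled[:n]        # each sample extends the previous one
            yield frac, sum(sample) / n  # work on earlier rows is reused

    values = [random.gauss(100.0, 15.0) for _ in range(100_000)]
    for frac, est in progressive_mean(values):
        print(f"sample={frac:>5.0%}  estimated mean={est:.2f}")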
Abstract:
At present, the management of precast concrete components in large precast concrete enterprises is chaotic. Most enterprises rely on a labor-intensive manual input method, which is time-consuming, laborious and error-prone. Somewhat better-equipped enterprises manage components through bar codes or manually printed serial numbers; however, this approach is also labor-intensive and, moreover, limited by the external environment, which can blur or even destroy the serial numbers, creating serious problems for production traceability and quality accountability. Achieving automated production management has therefore become a major challenge for a modern enterprise seeking rapid development and wishing to meet the needs of the times. To solve the problems of inefficient production and poor product traceability, this thesis introduces RFID technology into the production of PHC tubular piles. By designing a production management system for precast concrete components, an enterprise can control the entire production process and computerize its production management. RFID technology is already widely used in fields such as access control, charge management and logistics. The system adopts passive RFID tags, which are waterproof, shockproof and interference-resistant, and therefore suitable for the actual working environment. A tag is bound to the precast component's steel cage (the structure of the PHC tubular pile before concrete placement), so each PHC tubular pile receives a unique ID number. The precast component then passes through the production procedure as a series of steps: placing the steel cage into the mold, mold clamping, pouring concrete (feeding), stretching, centrifugal spinning, curing, mold removal and splice welding. At every step, the information of the precast component can be read with an RFID reader. Using a portable smart device connected to the database, the user can conveniently check, query and manage production information. The system can also trace production parameters and the person in charge, realizing full information traceability. It overcomes the disadvantages common among precast component manufacturers, such as inefficiency, error-proneness, long processing times, high labor intensity and low information relevance. The system can therefore improve production management efficiency and yield good economic and social benefits, giving it real practical value.
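As an illustration of the kind of traceability such a system enables, the following sketch logs RFID read events per production stage and reconstructs one pile's history. The table, stage and operator names are hypothetical, not taken from the thesis.

    # Hypothetical sketch of a traceability record for PHC tubular piles.
    import sqlite3
    from datetime import datetime, timezone

    STAGES = ["cage_in_mold", "mold_clamping", "concrete_pouring", "stretching",
              "centrifugal_spinning", "curing", "mold_removal", "splice_welding"]

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE read_event (
                        tag_id TEXT, stage TEXT, operator TEXT, read_at TEXT)""")

    def record_read(tag_id, stage, operator):
        """Store one RFID reader event; tag_id is the pile's unique ID."""
        assert stage in STAGES
        conn.execute("INSERT INTO read_event VALUES (?, ?, ?, ?)",
                     (tag_id, stage, operator,
                      datetime.now(timezone.utc).isoformat()))

    def trace(tag_id):
        """Reconstruct the full production history of one pile."""
        return conn.execute("SELECT stage, operator, read_at FROM read_event "
                            "WHERE tag_id = ? ORDER BY read_at",
                            (tag_id,)).fetchall()

    for stage in STAGES[:3]:
        record_read("PILE-000123", stage, operator="zhang")
    print(trace("PILE-000123"))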
Abstract:
Persistent daily congestion has been increasing in recent years, particularly along major corridors during peak periods in the mornings and evenings, when these roadways often operate at or near capacity. A conventional predefined control strategy cannot fit demands that change over time, which motivates the dynamic lane management strategies discussed in this thesis: hard shoulder running, reversible HOV lanes, dynamic tolls and variable speed limits. A mesoscopic agent-based DTA model is used to simulate the different strategies and scenarios. In the analyses, all strategies mitigate congestion in terms of average speed and average density. The largest improvements come from hard shoulder running and reversible HOV lanes, while the other two strategies provide more stable traffic. In terms of average speed and travel time, hard shoulder running proves the most effective strategy for relieving traffic pressure on I-270.
Abstract:
This paper explores the role of information and communication technologies in managing risk and early-discharge patients, and suggests innovative actions in the area of e-Health services. Treatments of chronic illnesses, and of special needs such as cardiovascular diseases, are conducted in long-stay hospitals and, in some cases, in patients' homes with follow-up from a primary care centre. This model is evolving along a clear trend: reducing the time and the number of visits patients make to health centres, and shifting tasks, as far as possible, toward outpatient care. The number of Early Discharge Patients (EDP) is also growing, permitting savings in the resources of the care centre. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications, and a software system architecture is outlined and discussed. The major contributions are, first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system in which software agents execute and interact from their remote locations; and second, the use of distributed decision making in multiagent systems as a means to integrate remote evidence and knowledge obtained from data collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular disease or Chronic Obstructive Pulmonary Disease (COPD), as well as to ambulatory surgery patients, and will allow the transmission of the patient's location and information about his/her illness to the hospital or care centre.
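A minimal sketch of the distributed decision-making idea, under the assumption that each device-side agent reports a local risk estimate that a coordinator agent fuses by weighted averaging; the agent names, weights and threshold are invented for illustration, not the paper's architecture.

    # Sketch: fusing remote evidence from device-side agents (hypothetical).
    from dataclasses import dataclass

    @dataclass
    class Evidence:
        agent_id: str     # e.g., a wearable sensor agent or a desktop agent
        risk: float       # local estimate in [0, 1] of a clinically relevant event
        weight: float     # confidence assigned to this agent's evidence

    def fuse(evidence, alarm_threshold=0.7):
        """Weighted average of remote estimates; alarm if it crosses the threshold."""
        total = sum(e.weight for e in evidence)
        fused = sum(e.risk * e.weight for e in evidence) / total
        return fused, fused >= alarm_threshold

    readings = [Evidence("ecg_agent", 0.85, 3.0),
                Evidence("spo2_agent", 0.40, 1.0),
                Evidence("location_agent", 0.20, 0.5)]
    risk, alarm = fuse(readings)
    print(f"fused risk={risk:.2f}, notify care centre={alarm}")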
Abstract:
Waiting time at an intensive care unit is a key feature in the assessment of healthcare quality. Nevertheless, its estimation is a difficult task, not only because of the many factors with intricate relations among them, but also because the available data may be incomplete, self-contradictory or even unknown. Predicting it, however, not only improves patients' satisfaction but also enhances the quality of the healthcare being provided. To fulfill this goal, this work develops a decision support system that predicts how long a patient should remain at an emergency unit, taking into consideration all the remarks stated above. It is built on top of a Logic Programming approach to knowledge representation and reasoning, complemented with a Case-Based approach to computing.
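A minimal sketch of the case-based component alone: predicting waiting time as a similarity-weighted average over the k most similar past cases. The feature set and similarity measure are assumptions made for illustration; in the actual system a Logic Programming layer additionally handles incomplete, self-contradictory or unknown values.

    # Sketch of case-based retrieval for waiting-time prediction (hypothetical
    # features: triage priority, occupancy rate, hour of day / 24).
    import math

    def similarity(a, b):
        """Inverse-distance similarity over normalised case features."""
        return 1.0 / (1.0 + math.dist(a, b))

    def predict_wait(new_case, case_base, k=3):
        """Similarity-weighted average of the k nearest cases."""
        nearest = sorted(case_base,
                         key=lambda c: -similarity(new_case, c["features"]))[:k]
        weights = [similarity(new_case, c["features"]) for c in nearest]
        return (sum(w * c["wait_minutes"] for w, c in zip(weights, nearest))
                / sum(weights))

    case_base = [{"features": (0.8, 0.9, 0.5), "wait_minutes": 240},
                 {"features": (0.2, 0.4, 0.3), "wait_minutes": 45},
                 {"features": (0.7, 0.8, 0.6), "wait_minutes": 180}]
    print(f"predicted wait: {predict_wait((0.75, 0.85, 0.55), case_base):.0f} min")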
Abstract:
This manuscript reports the overall development of a Ph.D. research project carried out during the "Mechanics and advanced engineering sciences" course at the Department of Industrial Engineering of the University of Bologna. The project focuses on the development of a combustion control system for an innovative spark-ignited engine layout. In detail, the controller manages a prototype engine equipped with a port water injection system. Water injection increases combustion efficiency through its knock-mitigation effect, which allows the combustion phasing to be kept closer to the optimal position than in the traditional layout. At the beginning of the project, the effects and possible benefits of water injection were investigated in a dedicated experimental campaign. The data obtained from combustion analysis were then processed to design a control-oriented combustion model. The model identifies the correlation between spark advance, combustion phasing and injected water mass; two different strategies are presented, both based on an analytic, semi-empirical approach and therefore compatible with real-time application. The model has been implemented in a combustion controller that manages water injection to reach the best achievable combustion efficiency while keeping knock below a pre-established threshold. Three versions of the algorithm are described in detail. The controller was designed and pre-calibrated in a software-in-the-loop environment, and an experimental validation was later performed with a rapid control prototyping approach to highlight the performance of the system on a real setup. Finally, to make the strategy implementable in an onboard application, an estimation algorithm for combustion phasing, which the controller requires, was developed during the last phase of the Ph.D. course, based on accelerometric signals.
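The sketch below illustrates the general shape of such a knock-limited phasing controller: steer the combustion phasing (MFB50) toward a target by adjusting spark advance, and retard spark while adding water when knock exceeds the threshold. The linear corrections and all constants are invented placeholders; the thesis derives its control-oriented model from the experimental campaign.

    # Toy sketch of a knock-limited combustion-phasing control step.
    def control_step(sa_deg, water_mg, mfb50_target=8.0, knock_threshold=1.0,
                     measured_mfb50=None, knock_index=None):
        """One iteration: steer MFB50 to target; retard SA and add water on knock.

        sa_deg   : current spark advance [deg BTDC]
        water_mg : current injected water mass per cycle [mg]
        """
        if knock_index is not None and knock_index > knock_threshold:
            # knock mitigation: more water widens the knock-free SA window
            water_mg += 5.0
            sa_deg -= 1.0
        elif measured_mfb50 is not None:
            # phasing control: proportional correction toward the optimum MFB50
            sa_deg += 0.5 * (measured_mfb50 - mfb50_target)
        return sa_deg, water_mg

    sa, water = 12.0, 0.0
    sa, water = control_step(sa, water, measured_mfb50=12.0)  # too late -> advance
    sa, water = control_step(sa, water, knock_index=1.4)      # knock -> retard + water
    print(f"SA={sa:.1f} deg BTDC, water={water:.1f} mg/cycle")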
Abstract:
Garlic is a spice and a medicinal plant; hence, there is increasing interest in developing new varieties with different culinary properties or with a high content of nutraceutical compounds. Phenotypic traits and dominant molecular markers are predominantly used to evaluate the genetic diversity of garlic clones. However, 24 codominant SSR markers specific to garlic are available in the literature, fostering germplasm research. In this study, we genotyped 130 garlic accessions from Brazil and abroad using 17 polymorphic SSR markers to assess their genetic diversity and structure. This is the first attempt to evaluate a large set of accessions maintained by Brazilian institutions. A high level of redundancy was detected in the collection (50% of the accessions represented eight haplotypes). The non-redundant accessions, however, presented high genetic diversity: we detected on average five alleles per locus, a Shannon index of 1.2, HO of 0.5, and HE of 0.6. A core collection of 17 accessions was defined, covering 100% of the alleles with minimum redundancy. Overall FST and D values indicate a strong genetic structure within the collection. The two major groups identified by both model-based (Bayesian) and hierarchical clustering (UPGMA dendrogram) techniques were coherent with the classification of accessions according to maturity time (growth cycle): early-late versus midseason accessions. Assessing the genetic diversity and structure of garlic collections is the first step toward efficient management and conservation of accessions in genebanks, as well as toward future genetic studies and the improvement of garlic worldwide.
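The per-locus statistics reported above follow standard definitions: the Shannon index I = -sum(p_i ln p_i) and expected heterozygosity HE = 1 - sum(p_i^2), computed from allele frequencies. A short sketch with toy allele counts:

    # Per-locus diversity statistics from allele calls (toy data).
    import math
    from collections import Counter

    def locus_stats(alleles):
        """alleles: list of observed allele labels at one SSR locus."""
        counts = Counter(alleles)
        freqs = [c / len(alleles) for c in counts.values()]
        shannon = -sum(p * math.log(p) for p in freqs)   # Shannon index I
        he = 1.0 - sum(p * p for p in freqs)             # expected heterozygosity
        return len(counts), shannon, he

    # hypothetical allele calls for one locus across accessions
    calls = ["180"] * 40 + ["184"] * 30 + ["188"] * 20 + ["192"] * 10
    n_alleles, shannon, he = locus_stats(calls)
    print(f"alleles={n_alleles}, Shannon I={shannon:.2f}, HE={he:.2f}")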
Abstract:
Background: Microarray techniques have become an important tool for investigating genetic relationships and assigning different phenotypes. Since microarrays are still very expensive, most experiments are performed with small samples. This paper introduces a method to quantify dependency between data series composed of few sample points. The method is used to construct gene co-expression subnetworks of highly significant edges. Results: The results shown here are for an adapted subset of a Saccharomyces cerevisiae gene expression data set with low temporal resolution and poor statistics. The method reveals common transcription factors with a high confidence level and allows the construction of subnetworks with high biological relevance, revealing characteristic features of the processes that drive the organism's adaptation to specific environmental conditions. Conclusion: Our method allows a reliable and sophisticated analysis of microarray data even under severe constraints. The use of systems biology improves biologists' ability to elucidate the mechanisms underlying cellular processes and to formulate new hypotheses.
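The paper introduces its own dependency measure; as a stand-in that also tolerates few sample points, the sketch below quantifies dependency with a rank correlation and an exhaustive permutation test, the kind of significance filter that keeps only highly significant edges. The five-point profiles are toy data.

    # Dependency between two short series with an exact permutation test.
    # (Stand-in measure; the paper defines its own dependency quantifier.)
    import itertools, statistics

    def rank(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for pos, i in enumerate(order):
            r[i] = float(pos)
        return r

    def spearman(x, y):
        return statistics.correlation(rank(x), rank(y))  # Pearson on ranks

    def permutation_pvalue(x, y):
        """Two-sided p-value over all permutations (feasible for few points)."""
        observed = abs(spearman(x, y))
        perms = list(itertools.permutations(y))
        hits = sum(abs(spearman(x, list(p))) >= observed - 1e-12 for p in perms)
        return hits / len(perms)

    gene_a = [0.1, 0.9, 1.4, 2.0, 2.2]   # toy 5-point expression profiles
    gene_b = [0.3, 1.1, 1.2, 1.9, 2.5]
    print(f"rho={spearman(gene_a, gene_b):.2f}, "
          f"p={permutation_pvalue(gene_a, gene_b):.3f}")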
Abstract:
Background: Hepatitis C virus (HCV) genotyping is the most significant predictor of the response to antiviral therapy. The aim of this study was to develop and evaluate a novel real-time PCR method for HCV genotyping based on the NS5B region. Methodology/Principal Findings: Two triplex reaction sets were designed, one to detect genotypes 1a, 1b and 3a, and another to detect genotypes 2a, 2b and 2c. This approach had an overall sensitivity of 97.0%, detecting 295 of the 304 tested samples. All samples genotyped by real-time PCR were assigned the same type as with LiPA version 1 (Line Probe Assay). Although LiPA v. 1 was unable to subtype 68 of the 295 samples (23.0%) and rendered subtype results different from those assigned by real-time PCR for 12/295 samples (4.0%), NS5B sequencing and real-time PCR results agreed in all 146 tested cases. The analytical sensitivity of the real-time PCR assay was determined by end-point dilution of the 5000 IU/ml member of the OptiQuant HCV RNA panel. The lower limit of detection was estimated to be 125 IU/ml for genotype 3a, 250 IU/ml for genotypes 1b and 2b, and 500 IU/ml for genotype 1a. Conclusions/Significance: The total time required to perform this assay was two hours, compared to the four hours required for LiPA v. 1 after PCR amplification. Furthermore, the estimated reaction cost was nine times lower than that of the commercial methods available in Brazil. Thus, we have developed an efficient, feasible, and affordable method for HCV genotype identification.
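The genotype call itself reduces to mapping positive probe channels in the two triplex reactions to subtypes. The sketch below shows that decision logic; the dye-to-subtype assignments and Ct cutoff are assumptions made for illustration, not the published protocol.

    # Hypothetical genotype-calling logic for the two triplex reactions.
    TRIPLEX_1 = {"FAM": "1a", "HEX": "1b", "Cy5": "3a"}
    TRIPLEX_2 = {"FAM": "2a", "HEX": "2b", "Cy5": "2c"}

    def call_genotype(ct_triplex1, ct_triplex2, ct_cutoff=38.0):
        """Return subtype calls from Ct values keyed by dye channel."""
        calls = []
        for mapping, cts in ((TRIPLEX_1, ct_triplex1), (TRIPLEX_2, ct_triplex2)):
            for channel, ct in cts.items():
                if ct is not None and ct < ct_cutoff:
                    calls.append(mapping[channel])
        return calls or ["indeterminate: refer to NS5B sequencing"]

    # a sample amplifying only in triplex 1's HEX channel -> subtype 1b
    print(call_genotype({"FAM": None, "HEX": 24.1, "Cy5": None},
                        {"FAM": None, "HEX": None, "Cy5": None}))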
Abstract:
This paper presents SMarty, a variability management approach for UML-based software product lines (PL). SMarty is supported by a UML profile, the SMartyProfile, and a process for managing variabilities, the SMartyProcess. SMartyProfile aims at representing variabilities, variation points, and variants in UML models by applying a set of stereotypes. SMartyProcess consists of a set of activities that are systematically executed to trace, identify, and control variabilities in a PL based on SMarty. It also identifies variability implementation mechanisms and analyzes specific product configurations. In addition, a more comprehensive application of SMarty is presented using SEI's Arcade Game Maker PL. An evaluation of SMarty is presented and related work is discussed.
Abstract:
In this study, the hypothesis that interceptive movements are controlled on the basis of expectancy of time to target arrival was tested. The study was conducted through assessment of temporal errors and kinematics of interceptive movements toward a moving virtual target. Initial target velocity was kept unchanged in part of the trials; in the others, it was decreased 300 ms before the expected time of target arrival at the interception position, increasing time to target arrival by 100 ms. Different probabilities of velocity decrease, ranging from 25 to 100%, were compared. The results revealed that while errors increased between probabilities of 25 and 75% for unchanged target velocity, the opposite relationship was observed for decreased target velocity. Kinematic analysis indicated that movement timing adjustments to the target velocity decrease were made online. These results support the conception that visuomotor integration in the interception of moving targets is mediated by an internal forward model whose weights can be flexibly adjusted according to expectancy of time to target arrival.
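The expectancy account has a simple worked form: if the target may slow 300 ms before arrival, adding 100 ms to its travel time, a forward model can aim at the probability-weighted expected arrival time. The weighting rule below is an illustration, not the paper's fitted model.

    # Probability-weighted expected time of target arrival (illustrative).
    def expected_arrival(t_unchanged_ms, p_decrease, delay_ms=100.0):
        """Expected arrival time given P(velocity decrease) adds delay_ms."""
        return ((1.0 - p_decrease) * t_unchanged_ms
                + p_decrease * (t_unchanged_ms + delay_ms))

    for p in (0.25, 0.50, 0.75, 1.00):
        print(f"P(decrease)={p:.2f} -> plan interception at "
              f"{expected_arrival(900.0, p):.0f} ms")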
Abstract:
A technique for improving the performance of an OSNR monitor based on a polarisation-nulling method with the downhill simplex algorithm is demonstrated. Establishing a compromise between accuracy and acquisition time, the monitor has been calibrated to 0.72 dB/390 ms and 0.98 dB/320 ms over a range of nearly 21 dB. As far as is known, these are the best values achieved with such an OSNR monitoring method.
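The downhill simplex method is the Nelder-Mead algorithm, so the calibration step can be sketched as a derivative-free fit of the monitor's parameters against reference OSNR values. The two-parameter linear model and the sample data below are stand-ins for the real polarisation-nulling monitor model.

    # Derivative-free calibration with the downhill simplex (Nelder-Mead).
    import numpy as np
    from scipy.optimize import minimize

    reference_osnr = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # dB, assumed
    raw_readings = np.array([4.1, 8.7, 13.5, 18.2, 22.8])      # monitor output

    def model(params, raw):
        gain, offset = params
        return gain * raw + offset

    def rms_error(params):
        return np.sqrt(np.mean((model(params, raw_readings) - reference_osnr) ** 2))

    result = minimize(rms_error, x0=[1.0, 0.0], method="Nelder-Mead")
    print(f"gain={result.x[0]:.3f}, offset={result.x[1]:.3f}, "
          f"residual={result.fun:.2f} dB")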
Abstract:
The goal of this paper is to study and propose a new noise-reduction technique for the reconstruction of speech signals, particularly for biomedical applications. The proposed method is based on Kalman filtering in the time domain combined with spectral subtraction. Comparison with a discrete Kalman filter in the frequency domain shows the better performance of the proposed technique. Performance is evaluated using the segmental signal-to-noise ratio and the Itakura-Saito distance. Results show that the time-domain Kalman filter combined with spectral subtraction is more robust and efficient, improving the Itakura-Saito distance by up to four times.
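Two building blocks of such an evaluation can be shown compactly: a scalar time-domain Kalman filter and the segmental signal-to-noise ratio. The AR(1) signal model and noise variances below are placeholder assumptions, and the paper's full method additionally combines the filter with spectral subtraction.

    # Scalar time-domain Kalman denoising and segmental SNR (toy signal).
    import numpy as np

    def kalman_denoise(noisy, a=0.95, q=1e-3, r=1e-1):
        """Scalar Kalman filter, state model x[k] = a*x[k-1] + w, y = x + v."""
        x, p = 0.0, 1.0
        out = np.empty_like(noisy)
        for k, y in enumerate(noisy):
            x, p = a * x, a * a * p + q          # predict
            k_gain = p / (p + r)                 # update
            x, p = x + k_gain * (y - x), (1.0 - k_gain) * p
            out[k] = x
        return out

    def segmental_snr(clean, processed, frame=256):
        """Mean per-frame SNR in dB, the segmental SNR used for evaluation."""
        snrs = []
        for i in range(0, len(clean) - frame, frame):
            s = clean[i:i + frame]
            e = s - processed[i:i + frame]
            snrs.append(10.0 * np.log10(np.sum(s ** 2) / (np.sum(e ** 2) + 1e-12)))
        return float(np.mean(snrs))

    t = np.linspace(0, 1, 8000)
    clean = np.sin(2 * np.pi * 200 * t)          # toy "speech" tone
    noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
    print(f"segSNR after filtering: {segmental_snr(clean, kalman_denoise(noisy)):.1f} dB")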
Abstract:
The health sector requires continuous investment to ensure the improvement of products and services from a technological standpoint, the use of new materials, equipment and tools, and the application of process management methods. Methods associated with the process management approach, such as the development of reference models of business processes, can provide significant innovations in the health sector and respond to the current market trend toward modern management in this sector (Gunderman et al. (2008) [4]). This article proposes a process model for diagnostic medical X-ray imaging, derives from it a primary reference model, and describes how this information leads to gains in quality and improvements.
Abstract:
This paper presents a reliability-based analysis for calculating critical tool life in machining processes. By obtaining the operations sequence of the machining procedure, it is possible to determine the running time of each tool involved in the process. Usually, the reliability of an operation depends on three independent factors: the operator, the machine tool and the cutting tool. The reliability of a part manufacturing process is mainly determined by the cutting time of each job and by the sequence of operations, defined by the series configuration. An algorithm is presented to define when the cutting tool must be changed. The proposed algorithm is used to evaluate the reliability of a manufacturing process composed of turning and drilling operations. The reliability of the turning operation is modeled on data presented in the literature; from experimental results, a statistical distribution of drilling tool wear was defined and the reliability of the drilling process was modeled.
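The series-configuration logic admits a short worked sketch: with independent operator, machine-tool and cutting-tool factors, operation reliability is their product, and a tool change is triggered when it falls below a target. The Weibull parameters and reliability values below are illustrative, not the paper's fitted data.

    # Series-system reliability with a Weibull cutting-tool model (illustrative).
    import math

    def weibull_reliability(t_min, beta, eta):
        """R(t) = exp(-(t/eta)^beta) for cumulative cutting time t [min]."""
        return math.exp(-((t_min / eta) ** beta))

    def operation_reliability(t_min, r_operator=0.999, r_machine=0.995,
                              beta=2.5, eta=90.0):
        """Independent factors in series: product of the three reliabilities."""
        return r_operator * r_machine * weibull_reliability(t_min, beta, eta)

    def tool_change_time(r_min=0.90, step=0.5):
        """Smallest cumulative cutting time at which reliability drops below r_min."""
        t = 0.0
        while operation_reliability(t) >= r_min:
            t += step
        return t

    print(f"change tool after ~{tool_change_time():.1f} min of cutting")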