948 results for Data reliability
Abstract:
Rework strategies that involve different checking points and rework times can be applied to a reconfigurable manufacturing system (RMS) under certain constraints, and an effective rework strategy can significantly improve the mission reliability of the manufacturing process. The mission reliability of a process is a measure of the production capability of an RMS, serving as an integrated performance indicator of the production process under specified technical constraints, including time, cost, and quality. To quantitatively characterize the mission reliability and basic reliability of an RMS under different rework strategies, a rework model of the RMS was established based on logistic regression. First, the functional relationship between the capability and workload of the manufacturing process was studied through statistical analysis of a large volume of historical data obtained from actual machining processes. Second, the output, mission reliability, and unit cost along different rework paths were calculated and taken as decision variables, based on different input quantities and the rework model described above. Third, optimal rework strategies for different input quantities were determined by calculating weighted decision values and analyzing the advantages and disadvantages of each rework strategy. Finally, a case application was presented to demonstrate the effectiveness of the proposed method.
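As a rough illustration of the weighted-decision step described in this abstract (not the authors' implementation), the sketch below ranks candidate rework strategies by a weighted combination of normalized output, mission reliability, and unit cost. All strategy names, figures, and weights are hypothetical placeholders.

```python
# Minimal sketch: choosing a rework strategy by weighted decision value over
# normalized output, mission reliability, and unit cost. Data are hypothetical.

strategies = {
    # name: (output_qty, mission_reliability, unit_cost)
    "rework_at_station_2": (940, 0.97, 12.4),
    "rework_at_final_check": (955, 0.95, 11.8),
    "no_rework": (900, 0.90, 10.5),
}
weights = {"output": 0.4, "reliability": 0.4, "cost": 0.2}  # assumed importance weights

def normalize(values, higher_is_better=True):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) if higher_is_better else (hi - v) / (hi - lo) for v in values]

names = list(strategies)
outputs = normalize([strategies[n][0] for n in names])
rels = normalize([strategies[n][1] for n in names])
costs = normalize([strategies[n][2] for n in names], higher_is_better=False)

scores = {
    n: weights["output"] * o + weights["reliability"] * r + weights["cost"] * c
    for n, o, r, c in zip(names, outputs, rels, costs)
}
best = max(scores, key=scores.get)
print(scores, "->", best)
```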
Abstract:
Computer software plays an important role in business, government, society and the sciences. To solve real-world problems, it is very important to measure quality and reliability throughout the software development life cycle (SDLC). Software Engineering (SE) is the computing field concerned with designing, developing, implementing, maintaining and modifying software. The present paper gives an overview of the Data Mining (DM) techniques that can be applied to various types of SE data in order to address the challenges posed by SE tasks such as programming, bug detection, debugging and maintenance. A specific DM software package is also discussed, namely an analytical tool for analyzing data and summarizing the relationships that have been identified. The paper concludes that the proposed DM techniques within the domain of SE could be applied equally well in fields such as Customer Relationship Management (CRM), eCommerce and eGovernment. ACM Computing Classification System (1998): H.2.8.
Abstract:
Most pavement design procedures incorporate reliability to account for the effect of uncertainty and variability in design inputs on predicted performance. The load and resistance factor design (LRFD) procedure, which delivers economical sections while treating the variability of each design input separately, has been recognised as an effective tool for incorporating reliability into design procedures. This paper presents a new reliability-based calibration in LRFD format for a mechanics-based fatigue cracking analysis framework. It employs a two-component reliability analysis methodology that combines a central composite design-based response surface approach with a first-order reliability method. The reliability calibration was carried out using a number of field pavement sections with well-documented performance histories and high-quality field and laboratory data. The effectiveness of the developed LRFD procedure was evaluated by performing pavement designs for various target reliabilities and design conditions. The results show excellent agreement between the target and actual reliabilities. They also indicate that more design features need to be included in the reliability calibration to minimise the deviation of the actual reliability from the target reliability.
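For orientation only, the sketch below shows a first-order reliability calculation for the simplest possible case, a linear limit state g = R - S (capacity minus demand) with independent normal variables. The distributions and numbers are invented; the paper's framework instead works with a response-surface approximation of a mechanics-based fatigue cracking model.

```python
# Hedged sketch of a first-order reliability calculation for a linear limit state
# g = R - S with independent normal R (capacity) and S (demand). Values are hypothetical.
from scipy.stats import norm

mu_R, sigma_R = 12.0, 2.0   # assumed capacity (e.g., allowable fatigue damage)
mu_S, sigma_S = 8.0, 1.5    # assumed demand (predicted damage over the design life)

beta = (mu_R - mu_S) / (sigma_R**2 + sigma_S**2) ** 0.5   # reliability index
p_fail = norm.cdf(-beta)                                   # probability of failure
reliability = 1.0 - p_fail
print(f"beta = {beta:.2f}, reliability = {reliability:.3f}")
```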
Abstract:
In recent years, the internet has grown exponentially, and become more complex. This increased complexity potentially introduces more network-level instability. But for any end-to-end internet connection, maintaining the connection's throughput and reliability at a certain level is very important. This is because it can directly affect the connection's normal operation. Therefore, a challenging research task is to improve a network's connection performance by optimizing its throughput and reliability. This dissertation proposed an efficient and reliable transport layer protocol (called concurrent TCP (cTCP)), an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end connection fault tolerance. The proposed cTCP protocol could aggregate multiple paths' bandwidth by supporting concurrent data transfer (CDT) on a single connection. Here concurrent data transfer was defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-Based CDT mechanism, which was based on a path's RTT (Round Trip Time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily included an RTT-Based load distribution and path management scheme, which was used to optimize connections' throughput and reliability. A congestion control and retransmission policy based on RTT was also provided. According to experiment results, under different network conditions, our RTT-Based CDT mechanism could acquire good CDT performance. Finally a CWND-Based CDT mechanism, which was based on a path's CWND (Congestion Window), to optimize CDT performance was introduced. This mechanism primarily included: a CWND-Based load allocation scheme, which assigned corresponding data to paths based on their CWND to achieve aggregate bandwidth; a CWND-Based path management, which was used to optimize connections' fault tolerance; and a congestion control and retransmission management policy, which was similar to regular TCP in its separate path handling. According to corresponding experiment results, this mechanism could acquire near-optimal CDT performance under different network conditions.
Abstract:
The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
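As a hedged sketch of the speed-based building block mentioned above, the code below estimates link travel time in a mid-point fashion: each half of the spacing between adjacent detectors is assumed to be traversed at the speed measured by the nearer detector. Detector positions and speeds are hypothetical; the study's hybrid models add congestion-state switching and shock wave refinements on top of estimates of this kind.

```python
# Mid-point style travel time estimate from point-detector speeds (hypothetical data).

def midpoint_travel_time(positions_km, speeds_kmh):
    """Estimate link travel time (hours) by applying each detector's speed to the
    half-spacing on either side of it."""
    total_hours = 0.0
    for (x1, v1), (x2, v2) in zip(zip(positions_km, speeds_kmh),
                                  zip(positions_km[1:], speeds_kmh[1:])):
        half = (x2 - x1) / 2.0
        total_hours += half / v1 + half / v2
    return total_hours

t = midpoint_travel_time([0.0, 1.2, 2.5, 4.0], [95.0, 60.0, 40.0, 80.0])
print(f"estimated travel time: {t * 60:.1f} minutes")
```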
Abstract:
Road pricing has emerged as an effective means of managing road traffic demand while raising additional revenue for transportation agencies. Research on the factors that govern travel decisions has shown that user preferences may be a function of the demographic characteristics of individuals and of perceived trip attributes. However, it is not clear which trip attributes are actually considered in the travel decision-making process, how these attributes are perceived by travelers, or how the set of trip attributes changes with the time of day or from day to day. In this study, operational Intelligent Transportation Systems (ITS) archives are mined, and aggregated preferences for a priced system are extracted at a fine time-aggregation level over an extended number of days. The resulting information is related to corresponding time-varying trip attributes such as travel time, travel time reliability, charged toll, and other parameters. The time-varying user preferences and trip attributes are linked by means of a binary choice (logit) model with a utility function linear in the trip attributes. The attribute weights in the utility function are then estimated dynamically for each time of day by means of an adaptive, limited-memory discrete Kalman filter (ALMF). The relationship between traveler choices and travel time is assessed using different rules to capture the logic that best represents traveler perception and the effect of real-time information on observed preferences. The impact of travel time reliability on traveler choices is investigated under its multiple definitions. Based on the results, it can be concluded that the ALMF algorithm allows robust estimation of time-varying weights in the utility function at fine time-aggregation levels. High correlations among the trip attributes severely constrain the simultaneous estimation of their weights. Despite the data limitations, it is found that the ALMF algorithm can provide stable estimates of the choice parameters for some periods of the day. Finally, the daily variation of user sensitivities for different periods of the day is found to resemble a well-defined normal distribution.
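A minimal sketch of the binary logit piece of this setup is shown below: the probability of choosing the priced alternative under a utility that is linear in trip attributes. The attribute names, coefficient values, and sign conventions are hypothetical; in the study the coefficients are re-estimated over time with the adaptive, limited-memory Kalman filter rather than held fixed as here.

```python
# Binary logit choice probability with a linear utility (hypothetical coefficients).
import math

def toll_choice_probability(attrs, coefs):
    """P(choose tolled alternative) = 1 / (1 + exp(-V)), with V linear in attributes."""
    v = sum(coefs[k] * attrs[k] for k in coefs)   # systematic utility difference
    return 1.0 / (1.0 + math.exp(-v))

attrs = {"travel_time_saved_min": 6.0, "reliability_gain_min": 4.0, "toll_usd": 2.5}
coefs = {"travel_time_saved_min": 0.12, "reliability_gain_min": 0.08, "toll_usd": -0.45}
print(f"P(use toll lane) = {toll_choice_probability(attrs, coefs):.2f}")
```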
Abstract:
The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas. Minority populations experience the effects of lead poisoning disproportionately. Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment. Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under conditions of normal use. Statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation. A second, un-aged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant. Graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80% reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A. The application of product A on the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80% reliable life of 19.78 years. This study reveals that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010. This ambitious target is feasible, provided there is an efficient application of innovative technology, a goal to which this study aims to contribute.
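For illustration under different assumptions than the study, the sketch below fits a Weibull model to complete (uncensored) time-to-failure data and reads off the 80%-reliable life, i.e. the time by which only 20% of specimens are expected to fail. The failure times are invented; the study itself relied on Shapiro-Meeker graphical analysis and a Cox log-linear model applied to accelerated-stress data rather than this direct fit.

```python
# Rough sketch: Weibull fit to (hypothetical, uncensored) failure times and the
# 80%-reliable life, i.e. the 20th percentile of the fitted distribution.
from scipy.stats import weibull_min

failure_times_years = [14.2, 17.5, 19.1, 20.3, 22.8, 24.0, 25.6, 27.9]  # hypothetical
shape, loc, scale = weibull_min.fit(failure_times_years, floc=0)

b20_life = weibull_min.ppf(0.20, shape, loc=loc, scale=scale)  # 80% reliability time
print(f"estimated 80%-reliable life: {b20_life:.1f} years")
```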
Abstract:
We present the first ecosystem-scale methane flux data from a northern Siberian tundra ecosystem covering the entire snow-free period from spring thaw until initial freeze-back. Eddy covariance measurements of methane emission were carried out from the beginning of June until the end of September in the southern central part of the Lena River Delta (72°22' N, 126°30' E). The study site is located in the zone of continuous permafrost and is characterized by an Arctic continental climate with very low precipitation and a mean annual temperature of -14.7°C. We found relatively low fluxes of on average 18.7 mg/m²/d, which we attribute to (1) extremely cold permafrost, (2) substrate limitation of the methanogenic archaea, and (3) a relatively high surface coverage of noninundated, moderately moist areas. Near-surface turbulence, as measured by the eddy covariance system at 4 m above the ground surface, was identified as the most important control on ecosystem-scale methane emission and explained about 60% of the variance in emissions, while soil temperature explained only 8%. In addition, atmospheric pressure was found to significantly improve an exponential model based on turbulence and soil temperature. Ebullition from waterlogged areas, triggered by decreasing atmospheric pressure and near-surface turbulence, is thought to be an important pathway that warrants more attention in future studies. The close coupling of methane fluxes and atmospheric parameters demonstrated here raises questions regarding the reliability of enclosure-based measurements, which inherently exclude these parameters.
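Purely as an illustration of fitting an exponential flux model of the kind mentioned above, the sketch below fits F = a * exp(b * u_star + c * T_soil) to synthetic data. All coefficients and data are invented; the paper additionally found atmospheric pressure to improve such a model.

```python
# Illustrative sketch: least-squares fit of an exponential flux model to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
u_star = rng.uniform(0.05, 0.5, 200)         # friction velocity (m/s), synthetic
t_soil = rng.uniform(0.0, 8.0, 200)          # soil temperature (deg C), synthetic
flux = 10.0 * np.exp(2.0 * u_star + 0.05 * t_soil) * rng.lognormal(0.0, 0.1, 200)

def model(X, a, b, c):
    u, t = X
    return a * np.exp(b * u + c * t)

(a, b, c), _ = curve_fit(model, (u_star, t_soil), flux, p0=(5.0, 1.0, 0.1))
print(f"a={a:.2f}, b={b:.2f}, c={c:.3f}")
```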
Abstract:
Robust joint modelling is an emerging field of research. Owing to advancements in electronic patient healthcare records, the popularity of joint modelling approaches has grown rapidly in recent years, as they provide simultaneous analysis of longitudinal and survival data. This research advances previous work through the development of a novel robust joint modelling methodology for one of the most common types of standard joint model, that which links a linear mixed model with a Cox proportional hazards model. Through t-distributional assumptions, longitudinal outliers are accommodated, with their detrimental impact being down-weighted, thus providing more efficient and reliable estimates. The robust joint modelling technique and its major benefits are showcased through the analysis of Northern Irish end-stage renal disease patients. With an ageing population and the growing prevalence of chronic kidney disease within the United Kingdom, there is a pressing demand to investigate the detrimental relationship between the changing haemoglobin levels of haemodialysis patients and their survival. As outliers within the NI renal data were found to have significantly worse survival, identification of outlying individuals through robust joint modelling may aid nephrologists in improving patients' survival. A simulation study was also undertaken to explore the difference between robust and standard joint models in the presence of increasing proportions and extremity of longitudinal outliers. More efficient and reliable estimates were obtained by robust joint models, with the contrast between the robust and standard joint models growing as a greater proportion of more extreme outliers was present. By illustrating the gains in efficiency and reliability of parameter estimates when outliers exist, the potential of robust joint modelling is evident. The research presented in this thesis highlights the benefits of, and stresses the need to utilise, a more robust approach to joint modelling in the presence of longitudinal outliers.
Abstract:
Market research is often conducted through conventional methods such as surveys, focus groups and interviews, but the drawbacks of these methods are that they can be costly and time-consuming. This study develops a new method, based on a combination of standard techniques such as sentiment analysis and normalisation, to conduct market research in a manner that is free and quick. The method can be used in many application areas, but this study focuses mainly on the veganism market, identifying vegan food preferences in the form of a profile. Several food words are identified, along with their distribution between positive and negative sentiments in the profile. Surprisingly, non-vegan foods such as cheese, cake, milk, pizza and chicken dominate the profile, indicating that there is a significant market for vegan-suitable alternatives to such foods. Meanwhile, vegan-suitable foods such as coconut, potato, blueberries, kale and tofu also make strong appearances in the profile. Validation is performed by applying the method to Volkswagen vehicle data to identify positive and negative sentiment across five car models. Some results were found to be consistent with sales figures and expert reviews, while others were inconsistent. The reliability of the method is therefore questionable, and the results should be used with caution.
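A toy sketch of the general idea is given below: counting positive and negative sentiment words around food mentions in posts and normalising by how often each food is mentioned, to build a simple preference profile. The lexicon, posts, and food list are tiny hypothetical stand-ins; the study applies a proper sentiment analyser to a large corpus.

```python
# Toy lexicon-based sentiment profile for food words, normalised by mention count.
from collections import defaultdict

positive = {"love", "great", "delicious", "amazing"}
negative = {"hate", "awful", "bland", "miss"}
foods = {"tofu", "cheese", "kale", "pizza"}

posts = [
    "I love tofu but I really miss cheese",
    "kale is bland, pizza is amazing",
    "delicious tofu scramble today",
]

counts = defaultdict(lambda: {"pos": 0, "neg": 0, "mentions": 0})
for post in posts:
    words = set(post.lower().replace(",", " ").split())
    for food in foods & words:
        counts[food]["mentions"] += 1
        counts[food]["pos"] += len(words & positive)
        counts[food]["neg"] += len(words & negative)

# normalise net sentiment by number of mentions to build a simple preference profile
profile = {f: (c["pos"] - c["neg"]) / c["mentions"] for f, c in counts.items()}
print(profile)
```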
Abstract:
BACKGROUND: The recently developed Context Assessment for Community Health (COACH) tool aims to measure aspects of the local healthcare context perceived to influence knowledge translation in low- and middle-income countries. The tool measures eight dimensions (organizational resources, community engagement, monitoring services for action, sources of knowledge, commitment to work, work culture, leadership, and informal payment) through 49 items. OBJECTIVE: The study aimed to explore the understanding and stability of the COACH tool among health providers in Vietnam. DESIGN: To investigate the response process, think-aloud interviews were undertaken with five community health workers, six nurses and midwives, and five physicians. Identified problems were classified according to Conrad and Blair's taxonomy and grouped according to an estimation of the magnitude of the problem's effect on the response data. Further, the stability of the tool was examined using a test-retest survey among 77 respondents. Reliability was analyzed for items (intraclass correlation coefficient (ICC) and percent agreement) and dimensions (ICC and Bland-Altman plots). RESULTS: In general, the think-aloud interviews revealed that the COACH tool was perceived as clear, well organized, and easy to answer. Most items were understood as intended. However, seven prominent problems with the items were identified, and the content of three dimensions was perceived to be of a sensitive nature. In the test-retest survey, two-thirds of the items and seven of the eight dimensions were found to have an ICC for agreement ranging from moderate to substantial (0.5-0.7), demonstrating that the instrument has an acceptable level of stability. CONCLUSIONS: This study provides evidence that the Vietnamese translation of the COACH tool is generally perceived to be clear and easy to understand and has acceptable stability. There is, however, a need to rephrase and add generic examples to clarify some items and to further review items with low ICC.
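As a hedged sketch of the test-retest reliability computation mentioned above, the code below computes a two-way, absolute-agreement, single-measure intraclass correlation (often labelled ICC(A,1)) from a subjects-by-occasions matrix. The scores are invented; the study computed ICCs per item and per dimension for 77 respondents.

```python
# ICC(A,1): two-way model, absolute agreement, single measure, from a
# respondents x (test, retest) score matrix. Data are hypothetical.
import numpy as np

scores = np.array([
    [4, 4], [3, 3], [5, 4], [2, 2], [4, 5],
    [3, 3], [5, 5], [2, 3], [4, 4], [3, 2],
], dtype=float)

n, k = scores.shape
grand = scores.mean()

ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()   # between-subject
ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()   # between-occasion
ss_total = ((scores - grand) ** 2).sum()
ss_err = ss_total - ss_rows - ss_cols

ms_rows = ss_rows / (n - 1)
ms_cols = ss_cols / (k - 1)
ms_err = ss_err / ((n - 1) * (k - 1))

icc_a1 = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
print(f"ICC(A,1) = {icc_a1:.2f}")
```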
Abstract:
Surface flow types (SFT) are advocated as ecologically relevant hydraulic units, often mapped visually from the bankside to characterise rapidly the physical habitat of rivers. SFT mapping is simple, non-invasive and cost-efficient. However, it is also qualitative, subjective and plagued by difficulties in recording accurately the spatial extent of SFT units. Quantitative validation of the underlying physical habitat parameters is often lacking, and does not consistently differentiate between SFTs. Here, we investigate explicitly the accuracy, reliability and statistical separability of traditionally mapped SFTs as indicators of physical habitat, using independent, hydraulic and topographic data collected during three surveys of a c. 50m reach of the River Arrow, Warwickshire, England. We also explore the potential of a novel remote sensing approach, comprising a small unmanned aerial system (sUAS) and Structure-from-Motion photogrammetry (SfM), as an alternative method of physical habitat characterisation. Our key findings indicate that SFT mapping accuracy is highly variable, with overall mapping accuracy not exceeding 74%. Results from analysis of similarity (ANOSIM) tests found that strong differences did not exist between all SFT pairs. This leads us to question the suitability of SFTs for characterising physical habitat for river science and management applications. In contrast, the sUAS-SfM approach provided high resolution, spatially continuous, spatially explicit, quantitative measurements of water depth and point cloud roughness at the microscale (spatial scales ≤1m). Such data are acquired rapidly, inexpensively, and provide new opportunities for examining the heterogeneity of physical habitat over a range of spatial and temporal scales. Whilst continued refinement of the sUAS-SfM approach is required, we propose that this method offers an opportunity to move away from broad, mesoscale classifications of physical habitat (spatial scales 10-100m), and towards continuous, quantitative measurements of the continuum of hydraulic and geomorphic conditions which actually exists at the microscale.
Abstract:
Power system engineers face a double challenge: to operate electric power systems within narrow stability and security margins, and to maintain high reliability. There is an acute need to better understand the dynamic nature of power systems in order to be prepared for critical situations as they arise. Innovative measurement tools, such as phasor measurement units, can capture not only the slow variation of voltages and currents but also the underlying oscillations in a power system. Such access to dynamic data provides strong motivation and a useful tool for exploring dynamic-data-driven applications in power systems. To fulfill this goal, this dissertation focuses on three areas: developing accurate dynamic load models and updating variable parameters based on measurement data; applying advanced nonlinear filtering concepts and techniques to real-time identification of power system models; and addressing computational issues by implementing the balanced truncation method. By obtaining more realistic system models, together with timely updated parameters and consideration of stochastic influences, we can form an accurate portrait of the ongoing phenomena in an electric power system and thereby further improve state estimation, stability analysis, and real-time operation.
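The sketch below illustrates balanced truncation for model-order reduction, hand-rolled with SciPy's Lyapunov solver on a small, stable toy state-space model. The matrices are hypothetical; the dissertation applies balanced truncation to far larger power system models to address computational cost.

```python
# Square-root balanced truncation on a toy stable (A, B, C) system (hypothetical data).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

A = np.array([[-1.0, 1.0, 0.0], [0.0, -2.0, 1.0], [0.0, 0.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

# Controllability and observability Gramians: A Wc + Wc A' + B B' = 0, and dually for Wo
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: SVD of Lo' Lc yields the Hankel singular values
Lc = cholesky(Wc, lower=True)
Lo = cholesky(Wo, lower=True)
U, hsv, Vt = svd(Lo.T @ Lc)

r = 2  # keep the two dominant Hankel singular values
T = Lc @ Vt.T @ np.diag(hsv ** -0.5)
Ti = np.diag(hsv ** -0.5) @ U.T @ Lo.T

Ar, Br, Cr = (Ti @ A @ T)[:r, :r], (Ti @ B)[:r, :], (C @ T)[:, :r]
print("Hankel singular values:", np.round(hsv, 4))
print("Reduced A:\n", np.round(Ar, 3))
```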
Abstract:
This work examines the construction of a water circulation model for recovery boilers. The main objective is to develop a spreadsheet template for the simulation model that makes it simple and fast to process recovery boiler heat flux data and transfer it into the Apros 6 simulation software. A further aim is to automate the work steps as far as possible, so that performing the water circulation calculation becomes simpler, more consistent, and more accurate. This is made possible by Excel macros and the new features of Apros 6, which now supports SCL command files that enable smooth data transfer between Apros and Excel. Processing the data used in water circulation calculations has previously been laborious, and its accuracy has largely depended on the modeller. This master's thesis makes use of the latest and more realistic CFD models of recovery boilers, which allow more accurate heat flux distributions to be created for the heat transfer surfaces of the recovery boiler; this change improves the accuracy of the water circulation calculation. In the experimental part of the work, the new Excel calculation tool and the new heat flux values are tested in practice. An existing Apros water circulation model is updated with the new heat flux values, and changes are made to its structure to improve accuracy. The functioning of the new model is also tested at 115% capacity, examining how the water circulation circuit responds to the higher thermal load. These three cases are compared with one another, and the differences in their water-steam circuits are examined.