901 results for travel time reliability
Abstract:
If a regenerative process is represented as semi-regenerative, we derive formulae enabling us to calculate the basic characteristics associated with the first occurrence time from the corresponding characteristics of the semi-regenerative process. Recursive equations, integral equations, and Monte Carlo algorithms are proposed for solving the problem in practice.
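A minimal Monte Carlo sketch of the kind of computation the abstract refers to, estimating the mean first-occurrence time over regeneration cycles; the cycle-length distribution, the event probability, and all numbers below are illustrative assumptions, not the paper's model:

    import random

    def simulate_first_occurrence(p_event=0.05, mean_cycle=2.0, rng=random):
        """Simulate regeneration cycles until the event first occurs.

        Assumptions: exponentially distributed cycle lengths, the event
        occurring in a given cycle with probability p_event, and the event
        placed uniformly within its cycle."""
        elapsed = 0.0
        while True:
            cycle = rng.expovariate(1.0 / mean_cycle)
            if rng.random() < p_event:
                return elapsed + rng.random() * cycle
            elapsed += cycle

    # Under these assumptions E[T] = ((1-p)/p)*mean_cycle + mean_cycle/2 = 39.0
    n = 100_000
    estimate = sum(simulate_first_occurrence() for _ in range(n)) / n
    print(f"Monte Carlo estimate of the mean first-occurrence time: {estimate:.2f}")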
Abstract:
Background: Summarised retinal vessel diameters are linked to systemic vascular pathology, and monochromatic images provide the best contrast for measuring vessel calibres. When images are obtained with a dual-wavelength oximeter, however, the red-free image can be extracted from the green-channel information alone, which in turn reduces the number of photographs taken at a given time. This reduces patient exposure to the camera flash and could still provide images of sufficient quality to reliably measure vessel calibres. Methods: We obtained retinal images of one eye of 45 healthy participants. Central retinal arteriolar and central retinal venular equivalents (CRAE and CRVE, respectively) were measured using semi-automated software from two monochromatic images: one taken with a red-free filter and one extracted from the green channel of a dual-wavelength oximetry image. Results: Participants were aged between 21 and 62 years; all were normotensive (SBP: 115 (12) mmHg; DBP: 72 (10) mmHg) and had normal intra-ocular pressures (12 (3) mmHg). Bland-Altman analysis revealed good agreement of CRAE and CRVE as obtained from both images (mean bias CRAE = 0.88; CRVE = 2.82). Conclusions: Summarised retinal vessel calibre measurements obtained from oximetry images are in good agreement with those obtained using red-free photographs.
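For readers unfamiliar with the statistic reported above, a minimal Bland-Altman computation on hypothetical CRAE values (the numbers are invented and are not the study's data):

    import statistics

    red_free = [152.1, 148.3, 160.7, 155.0, 149.9]   # hypothetical CRAE values
    oximetry = [153.0, 147.1, 161.9, 155.8, 150.6]   # hypothetical CRAE values

    # Bland-Altman agreement: mean bias and 95% limits of agreement
    diffs = [b - a for a, b in zip(red_free, oximetry)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    print(f"mean bias = {bias:.2f}, "
          f"95% limits of agreement = ({bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f})")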
Abstract:
Rework strategies involving different checking points and rework times can be applied to a reconfigurable manufacturing system (RMS) under certain constraints, and an effective rework strategy can significantly improve the mission reliability of the manufacturing process. Mission reliability is a measure of the production ability of an RMS, serving as an integrated performance indicator of the production process under specified technical constraints, including time, cost, and quality. To quantitatively characterize the mission reliability and basic reliability of an RMS under different rework strategies, a rework model of the RMS was established based on logistic regression. First, the functional relationship between the capability and workload of the manufacturing process was studied by statistically analyzing a large volume of historical data from actual machining processes. Second, the output, mission reliability, and unit cost of different rework paths were calculated and taken as decision variables, based on different input quantities and the rework model above. Third, the optimal rework strategy for each input quantity was determined by calculating weighted decision values and comparing the advantages and disadvantages of each strategy. Finally, a case application was presented to demonstrate the efficiency of the proposed method.
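A hedged sketch of the third step, ranking rework strategies by a weighted decision value over output, mission reliability, and unit cost; the strategy names, figures, and weights are invented for illustration:

    strategies = {
        # name: (output in units, mission reliability, unit cost) -- all assumed
        "rework-at-final-inspection": (940, 0.962, 11.8),
        "rework-after-each-stage":    (910, 0.987, 13.2),
        "no-rework":                  (900, 0.941, 10.9),
    }
    weights = (0.3, 0.5, 0.2)  # assumed relative importance of the three criteria

    def decision_value(output, reliability, cost):
        outs, rels, costs = zip(*strategies.values())
        norm = lambda x, xs: (x - min(xs)) / (max(xs) - min(xs))
        # cost is a "smaller is better" criterion, so its normalized value is inverted
        return (weights[0] * norm(output, outs)
                + weights[1] * norm(reliability, rels)
                + weights[2] * (1 - norm(cost, costs)))

    best = max(strategies, key=lambda name: decision_value(*strategies[name]))
    print("best strategy under these assumed weights:", best)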
Abstract:
Evaluating the reliability of the manufacturing process is a crucial task in product development. Process reliability is a measure of the production ability of a reconfigurable manufacturing system (RMS), serving as an integrated performance indicator of the production process under specified technical constraints, including time, cost, and quality. An integration framework for manufacturing process reliability evaluation is presented together with the product development process. A mathematical model and algorithm based on the universal generating function (UGF) are developed for calculating the reliability of the manufacturing process with respect to task intensity and process capacity, both of which are independent random variables. The rework strategies of the RMS under different task intensities are analyzed based on process reliability, and the optimization of rework strategies based on process reliability is discussed afterwards.
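A minimal sketch of the UGF idea for this setting: task intensity and process capacity are independent discrete random variables, and process reliability is the probability that capacity meets intensity. The probability mass functions below are assumptions for illustration, not the paper's data:

    from itertools import product

    capacity = {8: 0.7, 5: 0.2, 0: 0.1}   # units per shift: probability (assumed)
    intensity = {4: 0.5, 6: 0.3, 9: 0.2}  # units demanded: probability (assumed)

    # compose the two u-functions with the structure function "capacity >= intensity"
    reliability = sum(pc * pi
                      for (c, pc), (d, pi) in product(capacity.items(), intensity.items())
                      if c >= d)
    print(f"process reliability P(capacity >= intensity) = {reliability:.3f}")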
Abstract:
This paper presents a novel real-time power-device temperature estimation method that monitors the power MOSFET's junction temperature shift arising from thermal aging effects and incorporates updated electrothermal models of the power modules into digital controllers. The real-time estimator is emerging as an important tool for active control of device junction temperature as well as online health monitoring of power electronic systems, but its thermal model fails to address the device's ongoing degradation. Because of the mismatch in coefficients of thermal expansion between the layers of a power device, repetitive thermal cycling causes cracks, voids, and even delamination within the device components, particularly in the solder and thermal grease layers. Consequently, the thermal resistance of the device increases, making it possible to use thermal resistance (and junction temperature) as a key indicator for condition monitoring and control purposes. In this paper, the device temperature predicted via threshold-voltage measurements is compared with the real-time estimate, and the difference is attributed to aging of the device. The thermal models in the digital controllers are updated regularly to correct the shift caused by thermal aging. Experimental results on three power MOSFETs confirm that the proposed methodologies effectively incorporate thermal aging effects into the power-device temperature estimator with good accuracy. The developed adaptive techniques can be applied to other power devices such as IGBTs and SiC MOSFETs, and have significant economic implications.
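A hedged sketch of the adaptive idea described above, with a first-order lumped thermal model standing in for the paper's electrothermal models; all parameter values, including the threshold-voltage-derived reading, are placeholders:

    def estimate_tj(t_amb, power, r_th, c_th, tj_prev, dt):
        """One explicit-Euler step of dTj/dt = (P*Rth - (Tj - Ta)) / (Rth*Cth)."""
        return tj_prev + dt * (power * r_th - (tj_prev - t_amb)) / (r_th * c_th)

    r_th, c_th = 1.2, 0.05          # K/W and J/K, assumed module parameters
    t_amb, dt = 25.0, 0.01
    tj_model = t_amb

    for step in range(1000):
        power = 40.0                 # W, assumed steady dissipation
        tj_model = estimate_tj(t_amb, power, r_th, c_th, tj_model, dt)
        tj_measured = 76.0           # placeholder for the threshold-voltage reading
        if step % 100 == 99:
            # attribute the persistent mismatch to aging and nudge Rth accordingly
            r_th *= 1 + 0.1 * (tj_measured - tj_model) / max(tj_model - t_amb, 1.0)

    print(f"updated thermal resistance: {r_th:.3f} K/W, model Tj: {tj_model:.1f} °C")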
Abstract:
Cardiac troponin I (cTnI) is one of the most useful serum marker tests for the determination of myocardial infarction (MI). The first commercial cTnI assay was released for medical use in the United States and Europe in 1995. It is useful in determining whether the source of chest pain of unknown etiology is cardiac related. Cardiac TnI is released into the bloodstream following myocardial necrosis (cardiac cell death) resulting from an infarct (heart attack). This research project investigates the utility of cardiac troponin I as a potential marker for the determination of time of death. The approach is not to investigate cTnI degradation in serum/plasma, but to investigate the proteolytic breakdown of this protein in heart tissue postmortem. If our hypothesis is correct, cTnI might show a distinctive temporal degradation profile after death, and this profile may have potential as a time-of-death marker in forensic medicine. The field of time-of-death markers has lagged behind the great advances in technology since the late 1850s; today, medical examiners use rudimentary time-of-death markers that offer limited reliability in the medico-legal arena. Cardiac TnI must be stabilized to avoid further degradation by proteases during the extraction process. Chemically derivatized magnetic microparticles were covalently linked to anti-cTnI monoclonal antibodies. A charge-capture approach, exploiting the negative charge on the microparticles, was also used to eliminate the antibody from the magnetic microparticles. The magnetic microparticles were used to extract cTnI from heart tissue homogenate for further bio-analysis. Cardiac TnI was eluted from the beads with a buffer and analyzed. The technique exploits the banding pattern on sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), followed by western blot transfer to polyvinylidene fluoride (PVDF) membrane for probing with anti-cTnI monoclonal antibodies. Bovine hearts were used as a model to establish the relationship between time of death and concentration/band pattern, given their homology to human cardiac TnI. The final concept feasibility was tested with human heart samples from cadavers with known times of death.
Abstract:
The use of canines for the detection of explosives is well established worldwide, and those applying this technology range from police forces and law enforcement to humanitarian agencies in the developing world. Despite the recent surge in publications on novel instrumental sensors for explosives detection, canines are still regarded by many as the most effective real-time field method of explosives detection. Unlike instrumental methods, however, it is currently difficult to determine detection levels, calibrate the canines' ability, or produce scientifically valid quality control checks. Accordingly, amid increasingly strict requirements for the admission of forensic evidence, such as Frye and Daubert, there is a need for a better scientific understanding of the process of canine detection.

As with any instrumental technique, peer-reviewed publication of the reliability, success, and error rates of canine detection is required for admissibility. Training is commonly focused on high explosives such as TNT and Composition 4, while low explosives such as black and smokeless powders are often added only for completeness.

Headspace analyses of explosive samples were performed by solid-phase microextraction (SPME) paired with gas chromatography-mass spectrometry (GC-MS) and gas chromatography-electron capture detection (GC-ECD), highlighting common odour chemicals. The odour chemicals detected were then presented to previously trained and certified explosives detection canines, and the activity or inactivity of each odour was determined through field trials and experiments.

It was demonstrated that TNT and cast explosives share a common odour signature, and the same may be said of plasticized explosives such as Composition C-4 and Deta Sheet. Conversely, smokeless powders were demonstrated not to share common odours. An evaluation of the effectiveness of commercially available pseudo aids reported limited success. The implications of the explosive odour studies for canine training then led to the development of novel inert training aids based upon the active odours determined.
Abstract:
The study examines the thought of Yanagita Kunio (1875–1962), an influential Japanese nationalist thinker and a founder of the academic discipline named minzokugaku. The purpose of the study is to bring to light the unredeemed potential of his intellectual and political project as a critique of the way in which modern politics and knowledge systematically suppress global diversity. The study reads his texts against the backdrop of the modern understanding of space and time and its political and moral implications, and traces the historical evolution of his thought, which culminates in the establishment of minzokugaku. My reading of Yanagita's texts draws on three interpretive hypotheses. First, his thought can be interpreted as a critical engagement with John Stuart Mill's philosophy of history, as he turns Mill's defense of diversity against Mill's justification of enlightened despotism in non-Western societies. Second, to counter Mill's individualistic notion of progressive agency, he turns to a Marxian notion of anthropological space, in which a laboring class makes history by continuously transforming nature, and rehabilitates the common people (jomin) as progressive agents. Third, in addition to the common people, Yanagita integrates wandering people as a countervailing force against the innate parochialism and conservatism of agrarian civilization. To excavate the unrecorded history of ordinary farmers and wandering people and to promote the formation of national consciousness, his minzokugaku adopts travel as an alternative method of knowledge production and political education. In light of this interpretation, the aim of Yanagita's intellectual and political project can be understood as both a defense and a critique of the Enlightenment tradition. Intellectually, he attempts to navigate between spurious universalism and reactionary particularism by revaluing diversity as a necessary condition for universal knowledge and human progress. Politically, his minzokugaku aims at nation-building/globalization from below by tracing the history of a migratory process that cuts across existing boundaries. His project is opposed to nation-building from above, which aims to integrate the world population into international society at the expense of global diversity.
Abstract:
Over the past few decades, we have enjoyed tremendous benefits from the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complicated processor architectures. However, the exponentially increased transistor density has led directly to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact a system's cost, performance, and reliability, but also increase leakage and thus overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continuous evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit level and the logic level to the architectural and system levels.

In this dissertation, we present our research on employing real-time scheduling techniques to solve resource-constrained, power/thermal-aware design-optimization problems. We developed a set of simple yet accurate system-level models that capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization.

The novelty of this work is that we integrate cutting-edge circuit- and architectural-level research on power and thermal behavior into a set of accurate yet simplified system-level models, enabling system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation for guiding the development of power/thermal-aware scheduling algorithms in practical computing systems.
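A minimal sketch of the leakage/temperature/supply-voltage interdependency that such system-level models capture; the coefficients are illustrative assumptions, and steady state is found by fixed-point iteration because leakage raises temperature, which in turn raises leakage:

    def leakage_power(v_dd, temp_c, k0=0.8, k1=0.02):
        # linearized leakage model: grows with both supply voltage and temperature
        return v_dd * k0 * (1 + k1 * temp_c)

    def steady_temperature(p_total, t_amb=25.0, r_th=2.0):
        # lumped thermal model: steady-state temperature rise is Rth * P
        return t_amb + r_th * p_total

    v_dd, p_dynamic, temp = 1.0, 10.0, 25.0
    for _ in range(50):  # iterate the power/temperature loop to its fixed point
        temp = steady_temperature(p_dynamic + leakage_power(v_dd, temp))
    print(f"steady-state temperature with leakage feedback: {temp:.1f} °C")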
Abstract:
Two tourism-oriented travel samples were drawn from recent time periods representing economic growth (expansion) and recession cycles in the U.S. economy. Analysis suggests that during the recession period, a greater percentage of theme park visitors chose to travel by air. Second, theme park travelers were more likely to visit friends or family during the recession period. Third, recession-period theme park travelers were, on average, 10 years older than their rapid-growth counterparts. The average age difference of theme park visitors was found to be significantly different across cyclical economic periods. The findings support the need for additional studies that segment travelers by generational markets.
Abstract:
The author attempts to provide a definition of travel by comparing it with the instinctive migration of animals and birds and by viewing its changes over time. As a study of motion voluntarily undertaken, a history of travel can contribute to a better understanding of human beings.
Abstract:
In the discussion - Travel Marketing: Industry Relationships and Benefits - by Andrew Vladimir, Visiting Assistant Professor, School of Hospitality Management at Florida International University, the author initially states: "A symbiotic relationship exists among the various segments of the travel and tourism industry. The author has solicited the thinking of 37 experts and leaders in the field in a book dealing with these relationships and how they can be developed to benefit the industry. This article provides some salient points from those contributors."

This article could be considered a primer on networking for the hospitality industry. It has everything to do with marketing and the relationships between varied systems in the field of travel and tourism. Vladimir points to instances of success and failure in marketing for the industry at large, and there are points of view from thirty-seven contributing sources here.

"Miami Beach remains a fitting example of a leisure product that has been unable to get its act together," Vladimir offers. "There are some first class hotels, a few good restaurants, alluring beaches, and a splendid convention center, but there is no synergism between them, no real affinity, and so while visitors admire the Fontainebleau Hilton and enjoy the food at Joe's Stone Crabs, the reputation of Miami Beach as a resort remains sullied."

In describing cohesiveness between exclusive systems, Vladimir says, "If each system can get a better understanding of the inner workings of neighboring related systems, each will ultimately be more successful in achieving its goals." The article suggests that exclusive systems aren't really exclusive at all, or at least they shouldn't be. In a word, competition drives the market, and for a property to stay afloat, aggressive marketing integrated with all attendant resources is crucial.

Tisch [Preston Robert Tisch, at the time of this writing the Postmaster General of the United States and formerly president of Loews Hotels and the New York Convention and Visitors Bureau], in talking about the need for aggressive marketing, says: "Never...ever...take anything for granted. Never...not for a moment...think that any product or any place will survive strictly on its own merits."

Vladimir not only sources several knowledgeable representatives in the field of hospitality and tourism, but also links elements as disparate as real estate, car rental, cruise lines and airlines, travel agencies, and traveler profiles to illustrate his points on marketing integration.

In closing, Vladimir quotes the Honorable Donna Tuttle, Undersecretary of Commerce for Travel and Tourism: "Uniting the components of this industry in an effective marketing coalition that can compete on an equal footing with often publicly-owned foreign tourism conglomerates and multi-national consortia must be a high priority as the United States struggles to maintain and expand its share of a rapidly changing global market."
Abstract:
Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increased power consumption translates directly into high chip temperatures, which not only raise packaging and cooling costs, but also degrade the performance, reliability, and life span of computing systems. Moreover, high chip temperatures greatly increase leakage power consumption, which is becoming ever more significant with the continued scaling of transistor sizes. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems.

In this dissertation, we address power/thermal issues from a system-level perspective. Specifically, we employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. We first explored the fundamental principles of employing dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We then proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set, and developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research from single-core to multi-core platforms: we investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method for calculating the energy consumption of a given voltage schedule. Finally, we conclude the dissertation with a discussion of future extensions of this research.
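A toy illustration of the intuition behind an oscillating schedule (the power and thermal models and all numbers are assumptions, not the dissertation's analysis): executing the same workload as many short run/idle segments instead of one long burst lowers the peak temperature:

    def peak_temperature(schedule, t_amb=25.0, r_th=1.5, c_th=0.2, dt=0.001):
        temp, peak = t_amb, t_amb
        for v in schedule:                             # v = processor speed in [0, 1]
            power = 20.0 * v ** 3                      # assumed speed-to-power model
            temp += dt * (power * r_th - (temp - t_amb)) / (r_th * c_th)
            peak = max(peak, temp)
        return peak

    work, idle = 10_000, 10_000                        # equal busy and idle steps
    burst = [1.0] * work + [0.0] * idle                # run everything, then cool
    m = 20                                             # number of oscillations
    oscillate = ([1.0] * (work // m) + [0.0] * (idle // m)) * m

    print(f"peak, single burst   : {peak_temperature(burst):.1f} °C")
    print(f"peak, {m} oscillations: {peak_temperature(oscillate):.1f} °C")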
Abstract:
The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas, and minority populations experience the effects of lead poisoning disproportionately.

Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment.

Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under conditions of normal use, and statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation; a second, un-aged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant.

Graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80% reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A; application of product A on the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80% reliable life of 19.78 years.

This study reveals that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010. This ambitious target is feasible, provided there is an efficient application of innovative technology, a goal to which this study aims to contribute.
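A hedged sketch of the extrapolation step, in the spirit of the graphical Weibull analysis cited above: fit a Weibull line by median-rank regression to accelerated failure times, scale to use conditions, and report the 80%-reliability life. The failure times and acceleration factor below are invented, not the study's data:

    import math

    failures_hours = sorted([310, 540, 650, 780, 905, 1100, 1380, 1520])  # assumed
    acceleration_factor = 400.0  # assumed stress-to-use time scaling

    n = len(failures_hours)
    # Weibull linearization with Benard's median-rank approximation
    xs = [math.log(t) for t in failures_hours]
    ys = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]

    # least squares for y = beta*x - beta*ln(eta)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)

    # time at which 20% have failed (80% reliability), scaled to use conditions
    t80 = eta * (-math.log(0.80)) ** (1 / beta) * acceleration_factor
    print(f"beta = {beta:.2f}, eta = {eta:.0f} h under stress; "
          f"80%-reliable life at use conditions ~ {t80 / 8760:.1f} years")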
Abstract:
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing CPU frequency is now a thing of the past, and researchers from both industry and academia have turned their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which the response time is as critical as the logical correctness of the computational results; in addition, a variety of stringent constraints, such as power/energy consumption, peak temperature, and reliability, are imposed on these systems. Real-time scheduling therefore plays a critical role in the system-level design of such computing systems. We started by addressing timing constraints for real-time applications on multi-core platforms, developing both partitioned and semi-partitioned algorithms for scheduling fixed-priority, periodic, hard real-time tasks. We then extended our research to temperature constraints: we derived a closed-form solution capturing the temperature dynamics of a given periodic voltage schedule on a multi-core platform, and developed three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further incorporated power/energy constraints with thermal awareness: we investigated the energy estimation problem on multi-core platforms and developed a computationally efficient method for calculating the energy consumption of a given voltage schedule. In this dissertation, we present this research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
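A minimal sketch of energy estimation for a piecewise-constant voltage schedule on a multi-core platform; the power model and every coefficient are assumptions for illustration rather than the dissertation's method:

    schedule = {  # core id -> list of (duration in s, supply voltage), assumed
        0: [(2.0, 1.0), (3.0, 0.8)],
        1: [(5.0, 0.9)],
    }

    def interval_energy(duration, v_dd, temp_c=60.0):
        p_dynamic = 15.0 * v_dd ** 3                 # ~ C_eff * f * V^2 with f ∝ V
        p_leak = v_dd * 0.6 * (1 + 0.015 * temp_c)   # linearized leakage model
        return duration * (p_dynamic + p_leak)

    total = sum(interval_energy(d, v)
                for intervals in schedule.values()
                for d, v in intervals)
    print(f"estimated energy for the schedule: {total:.1f} J")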