878 results for "improve acoustic performance"
Abstract:
The order-picking process is a central building block of the material and information flow within a company's internal logistics, particularly with regard to just-in-time deliveries and questions of product liability. The choice of picking system is therefore decisive for optimizing these personnel- and time-intensive picking operations: it serves to increase throughput while simultaneously reducing the error rate.
Abstract:
Opportunistic routing (OR) takes advantage of the broadcast nature and spatial diversity of wireless transmission to improve the performance of wireless ad-hoc networks. Instead of using a predetermined path to send packets, OR postpones the choice of the next hop to the receiver side and lets the multiple receivers of a packet coordinate to decide which one will be the forwarder. Existing OR protocols choose the next-hop forwarder based on a predefined candidate list calculated from a single network metric. In this paper, we propose TLG, a Topology and Link quality-aware Geographical opportunistic routing protocol. TLG uses multiple network metrics, such as network topology, link quality, and geographic location, to implement the coordination mechanism of OR. We compare TLG with well-known existing solutions, and simulation results show that TLG outperforms them in terms of both QoS and QoE metrics.
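As a rough illustration of multi-metric candidate ranking, a forwarder set can be scored and ordered as below. The weights and neighbor fields are hypothetical, chosen only to sketch the idea; they are not TLG's actual formula.

```python
import math

# Hypothetical weights: TLG's real metric combination is not reproduced
# here; this only sketches ranking next-hop candidates by link quality,
# geographic progress, and local topology (node degree) at once.
W_LINK, W_PROGRESS, W_DEGREE = 0.5, 0.3, 0.2

def progress(pos, sender, dest):
    """Fractional geographic progress toward dest (1.0 = at dest)."""
    d = math.dist(sender, dest)
    return (d - math.dist(pos, dest)) / d

def rank_candidates(sender, dest, neighbors):
    """Return neighbor names, best forwarder candidate first."""
    def score(n):
        return (W_LINK * n["link_quality"]                  # in [0, 1]
                + W_PROGRESS * progress(n["pos"], sender, dest)
                + W_DEGREE * min(n["degree"], 10) / 10)     # capped degree
    return [n["name"] for n in sorted(neighbors, key=score, reverse=True)]
```

In an OR protocol, the top-ranked receiver would typically arm the shortest forwarding timer, so the coordination mechanism falls out of the ranking.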
Abstract:
Medical instrumentation used in diagnosis and treatment relies on the accurate detection and processing of various physiological events and signals. While signal detection technology has improved greatly in recent years, there remain inherent delays in signal detection/processing. These delays may have significant negative clinical consequences during various pathophysiological events. Reducing or eliminating such delays would increase the ability to provide successful early intervention in certain disorders, thereby increasing the efficacy of treatment. In recent years, a physical phenomenon referred to as Negative Group Delay (NGD), demonstrated in simple electronic circuits, has been shown to temporally advance the detection of analog waveforms. Specifically, the output is temporally advanced relative to the input, as the time delay through the circuit is negative. The circuit output precedes the complete detection of the input signal. This process is referred to as signal advance (SA) detection. An SA circuit model incorporating NGD was designed, developed and tested. It imparts a constant temporal signal advance over a pre-specified spectral range in which the output is almost identical to the input signal (i.e., it has minimal distortion). Certain human patho-electrophysiological events are good candidates for the application of temporally-advanced waveform detection. SA technology has potential in early arrhythmia and epileptic seizure detection and intervention. Demonstrating reliable and consistent temporally advanced detection of electrophysiological waveforms may enable intervention in a pathological event (much) earlier than previously possible. SA detection could also be used to improve the performance of neural computer interfaces, neurotherapy applications, radiation therapy and imaging. In this study, the performance of a single-stage SA circuit model on a variety of constructed input signals and human ECGs is investigated.
The data obtained is used to quantify and characterize the temporal advances and circuit gain, as well as distortions in the output waveforms relative to their inputs. This project combines elements of physics, engineering, signal processing, statistics and electrophysiology. Its success has important consequences for the development of novel interventional methodologies in cardiology and neurophysiology as well as significant potential in a broader range of both biomedical and non-biomedical areas of application.
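A minimal numerical sketch of the NGD idea, using an idealized first-order transfer function rather than the thesis' actual SA circuit: H(jw) = 1 + jw*tau (realizable with an op-amp differentiator stage) has group delay -tau/(1 + (w*tau)^2), which is negative across the low-frequency band. The time constant below is an assumed placeholder.

```python
import cmath

# Idealized NGD transfer function H(jw) = 1 + jw*tau; tau is an
# assumed time constant, not a value from the thesis circuit.
TAU = 1e-3  # seconds

def H(w):
    return 1 + 1j * w * TAU

def group_delay(w, dw=1e-3):
    """Group delay -d(phase)/d(omega), estimated by central difference."""
    return -(cmath.phase(H(w + dw)) - cmath.phase(H(w - dw))) / (2 * dw)
```

A negative group delay at the frequencies of interest is exactly the "temporal signal advance" described above: the circuit's output phase leads the input.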
Abstract:
Recent studies have shown that sulforaphane, a naturally occurring compound found in cruciferous vegetables, offers cellular protection in several models of brain injury. When administered following traumatic brain injury (TBI), sulforaphane has been demonstrated to attenuate blood-brain barrier permeability and reduce cerebral edema. These beneficial effects of sulforaphane have been shown to involve induction of a group of cytoprotective, Nrf2-driven genes, whose protein products include free radical scavenging and detoxifying enzymes. However, the influence of sulforaphane on post-injury cognitive deficits has not been examined. In this study, we examined whether sulforaphane, when administered following cortical impact injury, can improve the performance of rats tested in hippocampal- and prefrontal cortex-dependent tasks. Our results indicate that sulforaphane treatment improves performance in the Morris water maze task (as indicated by decreased latencies during learning and platform localization during a probe trial) and reduces working memory dysfunction (tested using the delayed match-to-place task). These behavioral improvements were only observed when the treatment was initiated 1 h, but not 6 h, post-injury. These studies support the use of sulforaphane in the treatment of TBI, and extend the previously observed protective effects to include enhanced cognition.
Abstract:
We propose to build and operate a detector based on the emulsion film technology for the measurement of the gravitational acceleration on antimatter, to be performed by the AEgIS experiment (AD6) at CERN. The goal of AEgIS is to test the weak equivalence principle with a precision of 1% on the gravitational acceleration g by measuring the vertical position of the annihilation vertex of antihydrogen atoms after their free fall while moving horizontally in a vacuum pipe. With the emulsion technology developed at the University of Bern we propose to improve the performance of AEgIS by exploiting the superior position resolution of emulsion films over other particle detectors. The idea is to use a new type of emulsion films, especially developed for applications in vacuum, to yield a spatial resolution of the order of one micron in the measurement of the sag of the antihydrogen atoms in the gravitational field. This is an order of magnitude better than what was planned in the original AEgIS proposal.
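The micron-scale resolution requirement can be motivated with a back-of-envelope sag estimate. The beam parameters in the comment are purely illustrative, not AEgIS design values:

```python
# Vertical sag of a horizontally moving atom after flight length L at
# speed v, under gravitational acceleration g: s = g * (L/v)^2 / 2.
G = 9.81  # m/s^2; AEgIS measures the antihydrogen analogue of this value

def sag(L_m, v_mps, g=G):
    t = L_m / v_mps          # time of flight
    return 0.5 * g * t * t   # free-fall drop during that time

# For an assumed 1 m flight at 500 m/s, the sag is only about 20
# micrometers, which is why micron-level emulsion resolution matters.
```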
Abstract:
Long Term Evolution (LTE) is a fourth-generation (4G) technology capable of providing high data rates as well as support for high-speed mobility. The EU FP7 Mobile Cloud Networking (MCN) project integrates cloud computing concepts into LTE mobile networks in order to increase LTE's performance. In this way, a shared, distributed, virtualized LTE mobile network is built that can optimize the utilization of virtualized computing, storage, and network resources and minimize communication delays. Two important features that can be used in such a virtualized system to improve its performance are user mobility prediction and bandwidth prediction. This paper introduces the architecture and the challenges associated with user mobility and bandwidth prediction approaches in virtualized LTE systems.
Abstract:
BACKGROUND Avoidable hospitalizations (AH) are hospital admissions for diseases and conditions that could have been prevented by appropriate ambulatory care. We examine regional variation of AH in Switzerland and the factors that determine AH. METHODS We used hospital service areas and data from 2008-2010 hospital discharges in Switzerland to examine regional variation in AH. Age- and sex-standardized AH were the outcome variable, and year of admission, primary care physician density, medical specialist density, rurality, hospital bed density, and type of hospital reimbursement system were explanatory variables in our multilevel Poisson regression. RESULTS Regional differences in AH were as high as 12-fold. Poisson regression showed a significant increase in all AH over time. There was a significantly lower rate of all AH in areas with more primary care physicians. Rates increased in areas with more specialists. Rates of all AH also increased where the proportion of residences in rural communities increased. Regional hospital capacity and type of hospital reimbursement did not have significant associations. Inconsistent patterns of significant determinants were found for disease-specific analyses. CONCLUSION The identification of regions with high and low AH rates is a starting point for future studies on unwarranted medical procedures, and may help to reduce their incidence. AH have complex multifactorial origins, and this study demonstrates that rurality and physician density are relevant determinants. The results are helpful for improving the performance of the outpatient sector with emphasis on local context. Rural and urban differences in health care delivery remain a cause of concern in Switzerland.
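The age- and sex-standardization step can be sketched as an observed-to-expected ratio. The strata and numbers below are invented for illustration; the study's actual standardization is more detailed:

```python
# Indirect standardization sketch: expected admissions = sum over
# age/sex strata of (regional population in stratum) * (national AH
# rate in stratum); the standardized ratio is observed / expected.
def standardized_ratio(observed, population_by_stratum, national_rates):
    expected = sum(population_by_stratum[s] * national_rates[s]
                   for s in population_by_stratum)
    return observed / expected
```

A ratio above 1 flags a region with more avoidable hospitalizations than its demographic mix alone would predict.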
Abstract:
This paper describes a technique to significantly improve upon the mass peak shape and mass resolution of spaceborne quadrupole mass spectrometers (QMSs) through higher-order auxiliary excitation of the quadrupole field. Using a novel multiresonant tank circuit, additional frequency components can be used to drive modulating voltages on the quadrupole rods in a practical manner, suitable for both improved commercial applications and spaceflight instruments. Auxiliary excitation at frequencies near twice that of the fundamental quadrupole RF frequency provides the advantages of previously studied parametric excitation techniques, but with the added benefit of increased sensed excitation amplitude dynamic range and the ability to operate voltage scan lines through the center of upper stability islands. Using a field programmable gate array, the amplitudes and frequencies of all QMS signals are digitally generated and managed, providing a robust and stable voltage control system. These techniques are experimentally verified through an interface with a commercial Pfeiffer QMG422 quadrupole rod system. When operating through the center of a stability island formed from higher-order auxiliary excitation, approximately 50% and 400% improvements in 1% mass resolution and peak stability were measured, respectively, when compared with traditional QMS operation. Although tested with a circular rod system, the presented techniques have the potential to improve the performance of both circular and hyperbolic rod geometry QMS sensors.
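The drive signal described above can be sketched as a superposition of a DC offset, the fundamental RF, and a small auxiliary component near twice the fundamental. All amplitudes and frequencies below are placeholders, not the instrument's operating points:

```python
import math

OMEGA = 2 * math.pi * 1.0e6   # fundamental RF angular frequency (assumed)
V_RF, U_DC = 500.0, 84.0      # RF and DC amplitudes in volts (assumed)
AUX_FRAC = 0.02               # auxiliary amplitude as fraction of V_RF
AUX_MULT = 2.0                # auxiliary excitation near 2x fundamental

def rod_voltage(t):
    """Quadrupole rod drive: DC + fundamental RF + auxiliary component."""
    return (U_DC
            + V_RF * math.cos(OMEGA * t)
            + AUX_FRAC * V_RF * math.cos(AUX_MULT * OMEGA * t))
```

In the paper, this superposition is produced in hardware by the multiresonant tank circuit, with an FPGA digitally managing all amplitudes and frequencies.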
Abstract:
Information-centric networking (ICN) is a new communication paradigm that has been proposed to cope with drawbacks of host-based communication protocols, namely scalability and security. In this thesis, we base our work on Named Data Networking (NDN), a popular ICN architecture, and investigate NDN in the context of wireless and mobile ad hoc networks. In a first part, we focus on NDN efficiency (and potential improvements) in wireless environments by investigating NDN in wireless one-hop communication, i.e., without any routing protocols. A basic requirement to initiate information-centric communication is the knowledge of existing and available content names. Therefore, we develop three opportunistic content discovery algorithms and evaluate them in diverse scenarios for different node densities and content distributions. After content names are known, requesters can retrieve content opportunistically from any neighbor node that provides the content. However, in case of short contact times to content sources, content retrieval may be disrupted. Therefore, we develop a requester application that keeps meta information of disrupted content retrievals and enables resume operations when a new content source has been found. Besides message efficiency, we also evaluate power consumption of information-centric broadcast and unicast communication. Based on our findings, we develop two mechanisms to increase the efficiency of information-centric wireless one-hop communication. The first approach, called Dynamic Unicast (DU), avoids broadcast communication whenever possible, since broadcast transmissions result in more duplicate Data transmissions, lower data rates, and higher energy consumption on mobile nodes that are not interested in overheard Data, compared to unicast communication. Hence, DU uses broadcast communication only until a content source has been found and then retrieves content directly via unicast from the same source.
The second approach, called RC-NDN, targets efficiency of wireless broadcast communication by reducing the number of duplicate Data transmissions. In particular, RC-NDN is a Data encoding scheme for content sources that increases diversity in wireless broadcast transmissions such that multiple concurrent requesters can profit from each other's (overheard) message transmissions. If requesters and content sources are not in one-hop distance to each other, requests need to be forwarded via multi-hop routing. Therefore, in a second part of this thesis, we investigate information-centric wireless multi-hop communication. First, we consider multi-hop broadcast communication in the context of rather static community networks. We introduce the concept of preferred forwarders, which relay Interest messages slightly faster than non-preferred forwarders to reduce redundant duplicate message transmissions. While this approach works well in static networks, the performance may degrade in mobile networks if preferred forwarders regularly move away. Thus, to enable routing in mobile ad hoc networks, we extend DU for multi-hop communication. Compared to one-hop communication, multi-hop DU requires efficient path update mechanisms (since multi-hop paths may expire quickly) and new forwarding strategies to maintain NDN benefits (request aggregation and caching) such that only a few messages need to be transmitted over the entire end-to-end path even in case of multiple concurrent requesters. To perform quick retransmission in case of collisions or other transmission errors, we implement and evaluate retransmission timers from related work and compare them to CCNTimer, a new algorithm that enables shorter content retrieval times in information-centric wireless multi-hop communication. Yet, in case of intermittent connectivity between requesters and content sources, multi-hop routing protocols may not work because they require continuous end-to-end paths.
Therefore, we present agent-based content retrieval (ACR) for delay-tolerant networks. In ACR, requester nodes can delegate content retrieval to mobile agent nodes, which move closer to content sources, can retrieve content and return it to requesters. Thus, ACR exploits the mobility of agent nodes to retrieve content from remote locations. To enable delay-tolerant communication via agents, retrieved content needs to be stored persistently such that requesters can verify its authenticity via original publisher signatures. To achieve this, we develop a persistent caching concept that maintains received popular content in repositories and deletes unpopular content if free space is required. Since our persistent caching concept can complement regular short-term caching in the content store, it can also be used for network caching to store popular delay-tolerant content at edge routers (to reduce network traffic and improve network performance) while real-time traffic can still be maintained and served from the content store.
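The persistent caching idea (keep popular content, evict unpopular content when free space is required) can be sketched with a simple least-popular eviction policy. This is a simplified stand-in for illustration, not the thesis' actual repository design:

```python
class PersistentCache:
    """Popularity-based repository: least-requested entry is evicted."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}   # content name -> content
        self.hits = {}    # content name -> request count (popularity)

    def insert(self, name, content):
        # Evict the least popular entry if the repository is full.
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.hits, key=self.hits.get)
            del self.store[victim]
            del self.hits[victim]
        self.store[name] = content
        self.hits.setdefault(name, 0)

    def get(self, name):
        if name in self.store:
            self.hits[name] += 1   # each request raises popularity
            return self.store[name]
        return None                # cache miss
```

With such a policy, delay-tolerant content that keeps being requested survives in the repository, while one-off content is the first to go when space runs out.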
Abstract:
The PROPELLER (Periodically Rotated Overlapping Parallel Lines with Enhanced Reconstruction) magnetic resonance imaging (MRI) technique has inherent advantages over other fast imaging methods, including robust motion correction, reduced image distortion, and resistance to off-resonance effects. These features make PROPELLER highly desirable for T2*-sensitive imaging, high-resolution diffusion imaging, and many other applications. However, PROPELLER has been predominantly implemented as a fast spin-echo (FSE) technique, which is insensitive to T2* contrast, and requires time-inefficient signal averaging to achieve adequate signal-to-noise ratio (SNR) for many applications. These issues presently constrain the potential clinical utility of FSE-based PROPELLER. In this research, our aim was to extend and enhance the potential applications of PROPELLER MRI by developing a novel multiple gradient echo PROPELLER (MGREP) technique that can overcome the aforementioned limitations. The MGREP pulse sequence was designed to acquire multiple gradient-echo images simultaneously, without any increase in total scan time or RF energy deposition relative to FSE-based PROPELLER. A new parameter was also introduced for direct user control over gradient echo spacing, to allow variable sensitivity to T2* contrast. In parallel to pulse sequence development, an improved algorithm for motion correction was also developed and evaluated against the established method through extensive simulations. The potential advantages of MGREP over FSE-based PROPELLER were illustrated via three specific applications: (1) quantitative T2* measurement, (2) time-efficient signal averaging, and (3) high-resolution diffusion imaging. Relative to the FSE-PROPELLER method, the MGREP sequence was found to yield quantitative T2* values, increase SNR by ∼40% without any increase in acquisition time or RF energy deposition, and noticeably improve image quality in high-resolution diffusion maps.
In addition, the new motion correction algorithm was found to considerably improve motion-artifact reduction. Overall, this work demonstrated a number of enhancements and extensions to existing PROPELLER techniques. The new technical capabilities of PROPELLER imaging, developed in this thesis research, are expected to serve as the foundation for further expanding the scope of PROPELLER applications.
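Quantitative T2* mapping from multiple gradient echoes follows the mono-exponential decay S(TE) = S0 * exp(-TE/T2*); taking logarithms makes the fit linear, with slope -1/T2*. The sketch below is a generic log-linear least-squares fit, not the exact fitting procedure used in the MGREP work:

```python
import math

def fit_t2star(echo_times_ms, signals):
    """Estimate T2* (ms): slope of ln(S) versus TE equals -1/T2*."""
    n = len(echo_times_ms)
    ys = [math.log(s) for s in signals]          # linearize the decay
    mx = sum(echo_times_ms) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in echo_times_ms)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(echo_times_ms, ys))
    return -sxx / sxy                            # T2* = -1 / slope
```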
Abstract:
The 1999-2004 prevalence of chronic kidney disease (CKD) in adults 20 years or older is an estimated 7.69% (15.5 million people). The risk of developing CKD is exacerbated by diabetes, hypertension and/or a family history of kidney disease. African Americans, Hispanics, Pacific Islanders, Native Americans, and the elderly are more susceptible to higher incidence of CKD. The challenges of aging coupled with co-morbidities such as kidney disease raise the potential for malnutrition among elderly (for the purpose of this study, 55 years or older) populations. Lack of adherence to prescribed nutrition guidelines specific to renal failure jeopardizes body homeostasis and increases the likelihood of future morbidity and resultant mortality. The relationship and synergy that exist between diet and disease are evident. Clinical experience with renal patients has indicated the importance of adherence to diet therapy specific to kidney disease. Extensive investigation of diet adherence among end-stage renal disease patients revealed a sizeable gap in the current literature. This thesis study was undertaken to help fill that gap. The study design is qualitative and descriptive. Support, cooperation, and collaboration were provided by the University of Texas Nephrology Department, University of Texas Physicians, and DaVita Dialysis Centers. Approximately 105 male and female chronic to end-stage kidney disease patients were approached to participate in elicitation interviews in dialysis treatment facilities regarding their present diet beliefs and practices. Eighty-five were recruited and agreed to participate. Inclusion criteria required individuals to be between 35-90 years of age, capable of completing a 5-10 minute interview, and English speaking. Each kidney patient was asked seven (7) non-leading questions developed from the constructs of the Theory of Planned Behavior.
The study presents a descriptive comparison of behavioral, normative, and control beliefs that influence adherence to renal diets by age, race, and gender. The study successfully concluded that behavioral, normative, and control beliefs of chronic to end-stage renal patients promoted execution and adherence to prescribed nutrition. This study provides valuable information for dietitians, technicians, nurses, and physicians to assess patient compliance toward prescribed nutrition and the means to support or improve that performance.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU" provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. 
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
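The trend-analysis features these papers describe can be as simple as a least-squares slope computed over a fixed window of evenly sampled observations. The window contents and variable names below are illustrative only:

```python
def trend_slope(values):
    """Least-squares slope of evenly sampled values; one latent feature."""
    n = len(values)
    mx = (n - 1) / 2                    # mean of sample indices 0..n-1
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (v - my) for x, v in zip(range(n), values))
    return sxy / sxx

# A falling SpO2 window yields a negative slope: the kind of
# deterioration signal a single-snapshot multivariate model cannot see.
```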
Abstract:
Multiple guidelines recommend debriefing of actual resuscitations to improve clinical performance. We implemented a novel standardized debriefing program using a Debriefing In Situ Conversation after Emergent Resuscitations Now (DISCERN) tool. Following the development of the evidence-based DISCERN tool, we conducted an observational study of all resuscitations (intubation, CPR, and/or defibrillation) at a pediatric emergency department (ED) over one year. Resuscitation interventions, patient survival, and physician team leader characteristics were analyzed as predictors for debriefing. Each debriefing's participants, duration, and content were recorded. Thematic content of debriefings was categorized by a framework approach into Team Emergency Assessment Measure (TEAM) elements. There were 241 resuscitations and 63 (26%) debriefings. A higher proportion of debriefings occurred after CPR (p<0.001) or ED death (p<0.001). Debriefing participants always included an attending and a nurse; the median number of staff roles present was six. The median interval from resuscitation end to start of debriefing was 33 minutes (IQR 15-67), and the median debriefing duration was 10 minutes (IQR 5-12). Common TEAM themes included co-operation/coordination (30%), communication (22%), and situational awareness (15%). Stated reasons for not debriefing included: unnecessary (78%), time constraints (19%), or other reasons (3%). Debriefings with the DISCERN tool usually involved higher-acuity resuscitations, involved most of the indicated personnel, and lasted less than 10 minutes. This qualitative tool could be adapted to other settings. Future studies are needed to evaluate potential impacts on education, quality improvement programming, and staff emotional well-being.
Abstract:
In a feasibility study, the potential of proxy data for the temperature and salinity during the Last Glacial Maximum (LGM, about 19 000 to 23 000 years before present) in constraining the strength of the Atlantic meridional overturning circulation (AMOC) with a general ocean circulation model was explored. The proxy data were simulated by drawing data from four different model simulations at the ocean sediment core locations of the Multiproxy Approach for the Reconstruction of the Glacial Ocean surface (MARGO) project, and perturbing these data with realistic noise estimates. The results suggest that our method has the potential to provide estimates of the past strength of the AMOC even from sparse data, but in general, paleo-sea-surface temperature data without additional prior knowledge about the ocean state during the LGM is not adequate to constrain the model. On the one hand, additional data in the deep-ocean and salinity data are shown to be highly important in estimating the LGM circulation. On the other hand, increasing the amount of surface data alone does not appear to be enough for better estimates. Finally, better initial guesses to start the state estimation procedure would greatly improve the performance of the method. Indeed, with a sufficiently good first guess, just the sea-surface temperature data from the MARGO project promise to be sufficient for reliable estimates of the strength of the AMOC.
Abstract:
The objective of the present study is to examine the determinants of ISO 9001 certification, focusing on the effects of Product-related Environmental Regulations on Chemicals (PRERCs) and FDI, using the answers to several questions in our Vietnam survey conducted from December 2011 to January 2012. Our findings suggest that PRERCs may help with the improvement of quality control in Vietnamese firms. If Vietnamese manufacturing firms with ISO 9001 certification are more likely to adopt ISO 14001, as firms in developed countries are, our results indicate that the European chemical regulations may assist in the reduction of various environmental impacts in Vietnam. In addition, we found that FDI promotes the adoption of ISO 9001. If FDI firms in Vietnam adopt ISO 14001 after ISO 9001, as in the case of Malaysia and the developed economies, FDI firms may also be able to improve their environmental performance as a result of ISO 14001.