814 results for Load disaggregation algorithm
Abstract:
BACKGROUND: Variables influencing serum hepatitis C virus (HCV) RNA levels and genotype distribution in individuals with human immunodeficiency virus (HIV) infection are not well known, nor are factors determining spontaneous clearance after exposure to HCV in this population. METHODS: All HCV antibody (Ab)-positive patients with HIV infection in the EuroSIDA cohort who had stored samples were tested for serum HCV RNA, and HCV genotyping was done for subjects with viremia. Logistic regression was used to identify variables associated with spontaneous HCV clearance and HCV genotype 1. RESULTS: Of 1940 HCV Ab-positive patients, 1496 (77%) were serum HCV RNA positive. Injection drug users (IDUs) were less likely to have spontaneously cleared HCV than were homosexual men (20% vs. 39%; adjusted odds ratio [aOR], 0.36 [95% confidence interval {CI}, 0.24-0.53]), whereas patients positive for hepatitis B surface antigen (HBsAg) were more likely to have spontaneously cleared HCV than were those negative for HBsAg (43% vs. 21%; aOR, 2.91 [95% CI, 1.94-4.38]). Of patients with HCV viremia, 786 (53%) carried HCV genotype 1, and 53 (4%), 440 (29%), and 217 (15%) carried HCV genotype 2, 3, and 4, respectively. A greater HCV RNA level was associated with a greater chance of being infected with HCV genotype 1 (aOR, 1.60 per 1 log higher [95% CI, 1.36-1.88]). CONCLUSIONS: More than three-quarters of the HIV- and HCV Ab-positive patients in EuroSIDA showed active HCV replication. Viremia was more frequent in IDUs and, conversely, was less common in HBsAg-positive patients. Of the patients with HCV viremia analyzed, 53% were found to carry HCV genotype 1, and this genotype was associated with greater serum HCV RNA levels.
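The adjusted odds ratios above come from logistic regression. As a minimal, hedged illustration of what the numbers mean, the unadjusted odds ratio can be recovered directly from the reported clearance proportions (20% in IDUs vs. 39% in homosexual men); the small gap to the reported aOR of 0.36 reflects the covariate adjustment done in the paper, which this sketch does not attempt.

```python
# Unadjusted odds ratio for spontaneous HCV clearance: IDUs (20%) vs.
# homosexual men (39%), as reported in the abstract. The paper's aOR (0.36)
# additionally adjusts for covariates via logistic regression.

def odds(p):
    """Convert a proportion to odds."""
    return p / (1.0 - p)

def odds_ratio(p_exposed, p_reference):
    """Odds ratio of the exposed group relative to the reference group."""
    return odds(p_exposed) / odds(p_reference)

or_idu = odds_ratio(0.20, 0.39)
print(round(or_idu, 2))  # ~0.39 unadjusted, vs. the reported aOR of 0.36
```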
Abstract:
Introduction: New evidence from randomized controlled studies and etiology-of-fever studies, the availability of reliable rapid diagnostic tests (RDT) for malaria, and novel technologies call for a revision of the IMCI strategy. We developed a new algorithm based on (i) a systematic review of published studies assessing the safety and appropriateness of RDT and antibiotic prescription, (ii) results from a clinical and microbiological investigation of febrile children aged <5 years, and (iii) international IMCI expert opinions. The aim of this study was to assess the safety of the new algorithm among patients in urban and rural areas of Tanzania. Materials and Methods: The design was a controlled noninferiority study. Enrolled children aged 2-59 months with any illness were managed either by a study clinician using the new Almanach algorithm (two intervention health facilities, HF) or by clinicians using standard practice, including RDT (two control HF). At day 7 and day 14, all patients were reassessed. Patients who were ill in between or not cured at day 14 were followed until recovery or death. The primary outcome was the rate of complications; the secondary outcome was the rate of antibiotic prescriptions. Results: 1062 children were recruited. The main diagnoses were URTI (26%), pneumonia (19%) and gastroenteritis (9.4%). 98% (531/541) were cured at D14 in the Almanach arm and 99.6% (519/521) in controls. The rate of secondary hospitalization was 0.2% in each arm. One death occurred in the controls. None of the complications was due to withdrawal of antibiotics or antimalarials at day 0. The rate of antibiotic use was 19% in the Almanach arm and 84% in controls. Conclusion: Evidence suggests that the new algorithm, primarily aimed at the rational use of drugs, is as safe as standard practice and leads to a drastic reduction in antibiotic use. The Almanach is currently being tested for clinician adherence to the proposed procedures when used on paper or on a mobile phone.
Abstract:
This paper presents a Genetic Algorithm (GA) for the problem of sequencing units in a mixed-model non-permutation flowshop. Resequencing is permitted where stations have access to intermediate or centralized resequencing buffers. Access to a buffer is restricted by the number of available buffer places and by the physical size of the products.
Abstract:
The goal of this thesis was to develop an automatic optimization system for a small combined heat and power (CHP) plant owned by an energy company. The need for optimization stems from the energy company's electricity procurement from the power exchange, the purchase price of gas, the site's local electricity and heat loads, and other factors affecting the plant's economy. In the future, the optimization system is intended to manage several distributed energy production units centrally. An algorithm was developed that optimizes the plant's economy using operating models that adjust electric power output, as well as a direct electric power setpoint. The benefits produced by the algorithm were evaluated using historical measurement data from the CHP plant of the Harju learning centre. For optimizing the operation of CHP plants, a system based on centralized computation and distributed control was created. It controls the CHP plants in real time and predicts the plant's future operation with a time-series model based on historical data. The performance and benefit of the optimization system were assessed at the Harju learning centre's CHP plant by comparing the realized benefit calculated from measurements with the predicted benefit calculated by the optimization system.
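The economics-driven operation described above can be reduced to a very simple hourly decision: run the plant only when the value of the electricity and heat produced exceeds the fuel cost. The sketch below is a hypothetical illustration of that idea, not the thesis's algorithm; all efficiencies, prices and names are invented for the example.

```python
# Hypothetical sketch of an economic dispatch rule for a small CHP plant:
# run the unit in a given hour only if the value of the electricity (spot
# price) and heat it produces exceeds the cost of the gas it burns.
# All parameter values and efficiencies below are illustrative assumptions.

def chp_profit_eur(spot_price, heat_value, gas_price,
                   elec_eff=0.35, heat_eff=0.50):
    """Profit per MWh of fuel: electricity and heat revenue minus gas cost."""
    return spot_price * elec_eff + heat_value * heat_eff - gas_price

def dispatch(spot_prices, heat_value, gas_price):
    """Return an hourly on/off plan: run where the operating margin is positive."""
    return [chp_profit_eur(p, heat_value, gas_price) > 0 for p in spot_prices]

# Three example hours with rising spot prices (EUR/MWh): only the last is run.
plan = dispatch([20.0, 45.0, 80.0], heat_value=30.0, gas_price=35.0)
print(plan)  # [False, False, True]
```

A real system, as the abstract notes, would add a time-series forecast of loads and prices on top of such a rule.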
Abstract:
It is well known that welded structures under fatigue loading tend to fail precisely at the welded joints. Expert design of structures containing full-penetration welded joints, together with modern manufacturing methods, has almost eliminated fatigue failures in welded structures. Improving fatigue strength through a strict full-penetration requirement is, however, an uneconomical solution. The quality requirements set for full-penetration welded joints must define clear inspection guidelines and rejection criteria. The purpose of this master's thesis was to study the effect of geometric variables on the fatigue strength of load-carrying welded joints. Attention was mainly paid to design variables that influence the initiation of fatigue failures on the weld root side. Current codes and standards, which are based on experimental results, give rather general guidance on the fatigue design of welded joints. Therefore, entirely new parametric equations were derived for calculating the allowable threshold nominal stress range, Δσth, in order to avoid root-side fatigue failures of welded joints. In addition, weld-root fatigue classes (FAT) were calculated for each joint type and compared with the results obtained using existing design guidelines. Several three-dimensional (3D) analyses were performed as complementary references. Published data based on experimental results were used to help understand the fatigue behaviour of welded joints and to determine material constants. The fatigue strength of load-carrying partial-penetration welded joints was determined using the finite element method. The maximum principal stress criterion was used to predict fracture behaviour. For the selected weld material and test conditions, fracture behaviour was modelled using the crack growth rate da/dN and the stress intensity factor range ΔK.
The numerical integration of the Paris equation was carried out with the FRANC2D/L computer program. Based on the obtained results, the FAT class can be calculated for the case under study. Δσth was calculated from the stress intensity factor range of the initial crack and the threshold stress intensity factor range, ΔKth; at ranges below ΔKth the crack does not grow. The analyses assumed an as-welded joint without post-weld treatment, containing a pre-existing initial crack at the weld root. The results of the analyses are useful for designers making decisions on the geometric parameters that affect the fatigue strength of welded joints.
Abstract:
Increasing evidence suggests that working memory and perceptual processes are dynamically interrelated due to modulating activity in overlapping brain networks. However, the direct influence of working memory on the spatio-temporal brain dynamics of behaviorally relevant intervening information remains unclear. To investigate this issue, subjects performed a visual proximity grid perception task under three different visual-spatial working memory (VSWM) load conditions. VSWM load was manipulated by asking subjects to memorize the spatial locations of 6 or 3 disks. The grid was always presented between the encoding and recognition of the disk pattern. As a baseline condition, grid stimuli were presented without a VSWM context. VSWM load altered both perceptual performance and neural networks active during intervening grid encoding. Participants performed faster and more accurately on a challenging perceptual task under high VSWM load as compared to the low load and the baseline condition. Visual evoked potential (VEP) analyses identified changes in the configuration of the underlying sources in one particular period occurring 160-190 ms post-stimulus onset. Source analyses further showed an occipito-parietal down-regulation concurrent to the increased involvement of temporal and frontal resources in the high VSWM context. Together, these data suggest that cognitive control mechanisms supporting working memory may selectively enhance concurrent visual processing related to an independent goal. More broadly, our findings are in line with theoretical models implicating the engagement of frontal regions in synchronizing and optimizing mnemonic and perceptual resources towards multiple goals.
Abstract:
Abstract The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully-connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange.
This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
Abstract:
Adaptation of Kumar's algorithm for solving systems of equations with Toeplitz matrices over the reals to finite fields, in O(n log n) time.
Abstract:
The main motivation of this work has been to implement the Rijndael-AES algorithm in a Sage worksheet (Sage being a freely distributed mathematical software package under active development), taking advantage of its integrated tools and functionalities.
Abstract:
Technological development brings more and more complex systems to the consumer markets. The time required to bring a new product to market is crucial for the competitive edge of a company. Simulation is used as a tool to model these products and their operation before actual live systems are built. The complexity of these systems can easily require large amounts of memory and computing power. Distributed simulation can be used to meet these demands, but it has its own problems. Diworse, a distributed simulation environment, was used in this study to analyze the different factors that affect the time required for the simulation of a system. Examples of these factors are the simulation algorithm, communication protocols, partitioning of the problem, distribution of the problem, capabilities of the computing and communications equipment, and the external load. Offices offer vast amounts of unused capacity in the form of idle workstations. The use of this computing power for distributed simulation requires the simulation to adapt to a changing load situation. This requires all or part of the simulation work to be removed from a workstation when the owner wishes to use the workstation again. If load balancing is not performed, the simulation suffers from the workstation's reduced performance, which also hampers the owner's work. The operation of load balancing in Diworse is studied, it is shown to perform better than no load balancing, and different approaches to load balancing are discussed.
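The core load-balancing idea above can be sketched in a few lines: when a workstation's owner resumes work, the simulation partitions assigned to it are migrated to the remaining host with the fewest partitions. This is a minimal illustration of the concept, not the Diworse implementation; the data structures and names are invented for the example.

```python
# Minimal sketch (not the Diworse algorithm): migrate all simulation
# partitions off a workstation whose owner has returned, each to the
# remaining host that currently holds the fewest partitions.

def rebalance(assignment, busy_host, hosts):
    """Move every partition off busy_host onto the least-loaded other host."""
    candidates = [h for h in hosts if h != busy_host]
    for part, host in list(assignment.items()):
        if host == busy_host:
            # Pick the candidate with the fewest currently assigned partitions.
            target = min(candidates,
                         key=lambda h: sum(1 for v in assignment.values() if v == h))
            assignment[part] = target
    return assignment

assignment = {"p1": "ws1", "p2": "ws1", "p3": "ws2"}
rebalance(assignment, "ws1", ["ws1", "ws2", "ws3"])
print(assignment)  # p1 and p2 have been migrated off ws1
```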
Abstract:
Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that can cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined. It is shown how different current measurement error types affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model is created to analyse the effect of the frequency converter non-idealities on the performance of electric drives. The model makes it possible to identify potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems. The model is capable of coupling with separate simulation software for complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial-flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is also applicable to speed-sensorless drives.
The proposed compensation method is tested with a laboratory drive, which consists of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
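The analysis step of the compensation method, estimating the amplitude of one selected harmonic of the speed signal, can be sketched as a single-bin DFT: correlate the samples with sine and cosine at the harmonic frequency. This is a hedged illustration of that step only; the test signal and the chosen harmonic are assumptions, not values from the thesis.

```python
# Hedged sketch of the analysis step: estimate the amplitude of one selected
# harmonic of a sampled speed signal by correlating it with sine and cosine
# at that frequency (a single-bin DFT over one fundamental period).
import math

def harmonic_amplitude(samples, harmonic):
    """Amplitude of the given harmonic over one fundamental period of samples."""
    n = len(samples)
    c = sum(x * math.cos(2 * math.pi * harmonic * k / n) for k, x in enumerate(samples))
    s = sum(x * math.sin(2 * math.pi * harmonic * k / n) for k, x in enumerate(samples))
    return 2.0 * math.hypot(c, s) / n

# Illustrative speed signal: 1.0 p.u. mean with a first-harmonic ripple of 0.05,
# of the kind an offset-type current measurement error can produce.
speed = [1.0 + 0.05 * math.sin(2 * math.pi * k / 64) for k in range(64)]
print(round(harmonic_amplitude(speed, 1), 3))  # 0.05
```

Tracking this amplitude over time and choosing a compensation alternative that shrinks it mirrors the selection loop described in the abstract.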
Abstract:
To permit the tracking of turbulent flow structures in an Eulerian frame from single-point measurements, we make use of a generalization of conventional two-dimensional quadrant analysis to three-dimensional octants. We characterize flow structures using the sequences of these octants and show how significance may be attached to particular sequences using statistical null models. We analyze an example experiment and show how a particular dominant flow structure can be identified from the conditional probability of octant sequences. The frequency of this structure corresponds to the dominant peak in the velocity spectra and exerts a high proportion of the total shear stress. We link this structure explicitly to the propensity for sediment entrainment and show that greater insight into sediment entrainment can be obtained by disaggregating those octants that occur within the identified macroturbulence structure from those that do not. Hence, this work goes beyond critiques of Reynolds stress approaches to bed load entrainment that highlight the importance of outward interactions, to identifying and prioritizing the quadrants/octants that define particular flow structures. Key Points:
- A new method for analysing single-point velocity data is presented.
- Flow structures are identified by a sequence of flow states (termed octants).
- The identified structure exerts high stresses and causes bed-load entrainment.
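The octant classification itself is a simple sign test on the three fluctuating velocity components. The sketch below illustrates it; the octant numbering convention is an assumption for the example and not necessarily the one used in the paper.

```python
# Sketch of octant classification: each velocity sample is assigned to one of
# eight octants by the signs of its fluctuating components (u', v', w').
# The 0-7 numbering convention below is illustrative, not the paper's.

def octant(u_prime, v_prime, w_prime):
    """Map the signs of (u', v', w') to an octant index 0-7."""
    return ((u_prime > 0) << 2) | ((v_prime > 0) << 1) | (w_prime > 0)

# A time series of fluctuation triples becomes a sequence of octant states,
# whose recurring subsequences can then be tested against a null model.
series = [(0.3, -0.1, 0.2), (-0.4, 0.2, -0.1), (0.1, 0.1, 0.1)]
print([octant(*s) for s in series])  # [5, 2, 7]
```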
Abstract:
Recent laboratory studies have suggested that heart rate variability (HRV) may be an appropriate criterion for training load (TL) quantification. The aim of this study was to validate a novel HRV index that may be used to assess TL in field conditions. Eleven well-trained long-distance male runners performed four exercises of different duration and intensity. TL was evaluated using the Foster and Banister methods. In addition, HRV measurements were performed 5 minutes before exercise and 5 and 30 minutes after exercise. We calculated an HRV index (TLHRV) based on the ratio between the HRV decrease during exercise and the HRV increase during recovery. The HRV decrease during exercise was strongly correlated with exercise intensity (R = -0.70; p < 0.01) but not with exercise duration or training volume. The TLHRV index was correlated with the Foster (R = 0.61; p = 0.01) and Banister (R = 0.57; p = 0.01) methods. This study confirms that HRV changes during exercise and the recovery phase are affected by both the intensity and the physiological impact of the exercise. Since the TLHRV formula takes into account the disturbance and the return to homeostatic balance induced by exercise, this new method provides an objective and rational TL index. However, some simplification of the measurement protocol could be envisaged for field use.
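The abstract describes TLHRV only as a ratio between the HRV decrease during exercise and the HRV increase during recovery; the exact published formula is not reproduced here. The sketch below is therefore a hedged illustration of that ratio, with hypothetical HRV values (e.g. lnRMSSD) taken before exercise, shortly after exercise, and after recovery.

```python
# Hedged illustration of the TLHRV idea (not the published formula): the ratio
# of the exercise-induced HRV decrease to the recovery-phase HRV increase.
# All HRV values below are hypothetical.

def tl_hrv(hrv_pre, hrv_post_exercise, hrv_post_recovery):
    """Ratio of exercise-induced HRV decrease to recovery-phase HRV increase."""
    decrease = hrv_pre - hrv_post_exercise
    increase = hrv_post_recovery - hrv_post_exercise
    if increase <= 0:
        raise ValueError("no HRV recovery observed")
    return decrease / increase

# A harder session depresses HRV more and slows its return toward baseline,
# so both effects push the index upward.
print(round(tl_hrv(4.2, 2.0, 3.5), 2))
```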
Abstract:
The objective of the research was to analyse the potential of Normalized Difference Vegetation Index (NDVI) maps from satellite images, yield maps, and grapevine fertility and load variables to delineate zones with different wine grape properties for selective harvesting. Two vineyard blocks located in NE Spain (Cabernet Sauvignon and Syrah) were analysed. The NDVI was computed from a Quickbird-2 multi-spectral image at veraison (July 2005). Yield data were acquired by means of a yield monitor during September 2005. Other variables, such as the number of buds, number of shoots, number of wine grape clusters and weight of 100 berries, were sampled in a 10 rows × 5 vines pattern and used as input variables, in combination with the NDVI, to define the clusters as an alternative to yield maps. Two days prior to the harvesting, grape samples were taken. The analysed variables were probable alcoholic degree, pH of the juice, total acidity, total phenolics, colour, anthocyanins and tannins. The input variables, alone or in combination, were clustered (2 and 3 clusters) by using the ISODATA algorithm, and an analysis of variance and a multiple range test were performed. The results show that the zones derived from the NDVI maps are more effective at differentiating grape maturity and quality variables than the zones derived from the yield maps. The inclusion of other grapevine fertility and load variables did not improve the results.
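The zoning step above clusters an input variable such as NDVI into 2 or 3 classes. As a simplified, hedged stand-in for ISODATA (which additionally splits and merges clusters, omitted here), the sketch below runs plain k-means on one variable; the NDVI values are made up for illustration.

```python
# Simplified stand-in for the ISODATA zoning step: plain 1-D k-means on NDVI.
# ISODATA would additionally split/merge clusters; that logic is omitted here.

def kmeans_1d(values, k, iters=20):
    """Cluster scalar values into k zones; returns (centroids, labels)."""
    # Spread the initial centroids across the sorted value range.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recompute centroids.
        labels = [min(range(k), key=lambda j: abs(v - centroids[j])) for v in values]
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

# Hypothetical per-cell NDVI values from a vineyard block.
ndvi = [0.32, 0.35, 0.31, 0.62, 0.66, 0.60, 0.58, 0.30]
centroids, zones = kmeans_1d(ndvi, 2)
print(zones)  # low-vigour (0) vs. high-vigour (1) zones
```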
Abstract:
BACKGROUND: Rhinovirus is the most common cause of respiratory viral infections and leads to frequent respiratory symptoms in lung transplant recipients. However, it remains unknown whether the rhinovirus load correlates with the severity of symptoms. OBJECTIVES: This study aimed to better characterize the pathogenesis of rhinoviral infection and the way in which viral load correlates with symptoms. STUDY DESIGN: We assessed rhinovirus load in positive upper respiratory specimens of patients enrolled prospectively in a cohort of 116 lung transplant recipients. Rhinovirus load was quantified according to a validated in-house, real-time, reverse transcription polymerase chain reaction in pooled nasopharyngeal and pharyngeal swabs. Symptoms were recorded in a standardised case report form completed at each screening/routine follow-up visit, or during any emergency visit occurring during the 3-year study. RESULTS: Rhinovirus infections were very frequent, including in asymptomatic patients not seeking a specific medical consultation. Rhinovirus load ranged between 4.1 and 8.3 log copies/ml according to the type of visit and clinical presentation. Patients with highest symptom scores tended to have higher viral loads, particularly those presenting systemic symptoms. When considering symptoms individually, rhinovirus load was significantly higher in the presence of symptoms such as sore throat, fever, sputum production, cough, and fatigue. There was no association between tacrolimus levels and rhinovirus load. CONCLUSIONS: Rhinovirus infections are very frequent in lung transplant recipients and rhinoviral load in the upper respiratory tract is relatively high even in asymptomatic patients. Patients with the highest symptom scores tend to have a higher rhinovirus load.