929 results for source code analysis
Abstract:
γ-Hydroxybutyric acid (GHB) is an endogenous short-chain fatty acid popular as a recreational drug due to its sedative and euphoric effects, but it is also often implicated in drug-facilitated sexual assaults owing to its disinhibiting and amnesic properties. Whilst discrimination between endogenous and exogenous GHB, as required in intoxication cases, may be achieved by determining the carbon isotope content, such information has not yet been exploited to answer the source inference questions of forensic investigation and intelligence interest. However, potential isotopic fractionation effects occurring throughout the metabolism of GHB may be a major concern in this regard. Thus, urine specimens from six healthy male volunteers who ingested prescription GHB sodium salt, marketed as Xyrem®, were analysed by means of gas chromatography/combustion/isotope ratio mass spectrometry to assess this particular topic. A very narrow range of δ¹³C values, spreading from -24.81‰ to -25.06‰, was observed, whilst the mean δ¹³C value of Xyrem® corresponded to -24.99‰. Since urine samples and the prescription drug could not be distinguished by means of statistical analysis, carbon isotopic effects and any subsequent influence on δ¹³C values through GHB metabolism as a whole could be ruled out. Thus, a link between GHB as a raw matrix and GHB found in a biological fluid may be established, bringing relevant information regarding source inference evaluation. Therefore, this study supports a diversified scope of exploitation for stable isotopes characterized in biological matrices, from investigations on intoxication cases to drug intelligence programmes.
Abstract:
The generalization of simple correspondence analysis, for two categorical variables, to multiple correspondence analysis, where there may be three or more variables, is not straightforward, both from a mathematical and a computational point of view. In this paper we detail the exact computational steps involved in performing a multiple correspondence analysis, including the special aspects of adjusting the principal inertias to correct the percentages of inertia, supplementary points, and subset analysis. Furthermore, we give the algorithm for joint correspondence analysis, where the cross-tabulations of all unique pairs of variables are analysed jointly. The code in the R language for every step of the computations is given, as well as the results of each computation.
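The basic computational steps summarized above can be illustrated with a short sketch. The following Python fragment (an illustration under stated assumptions, not the paper's R code) performs multiple correspondence analysis on the indicator matrix and applies a Greenacre-style adjustment of the principal inertias; the toy data frame, variable names and constants are invented for illustration only.

```python
# Minimal MCA sketch: CA of the indicator matrix plus adjusted inertias.
import numpy as np
import pandas as pd

# Toy data set with Q = 3 categorical variables (purely illustrative).
df = pd.DataFrame({
    "smoker": ["yes", "no", "no", "yes", "no", "no"],
    "gender": ["m", "f", "f", "m", "m", "f"],
    "region": ["north", "south", "south", "north", "south", "north"],
})
Q = df.shape[1]

Z = pd.get_dummies(df).to_numpy(dtype=float)   # indicator matrix
P = Z / Z.sum()                                # correspondence matrix
r = P.sum(axis=1)                              # row masses
c = P.sum(axis=0)                              # column masses

# SVD of the standardized residuals gives the principal axes.
S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

inertias = sv ** 2                             # principal inertias (indicator matrix)

# Adjustment: rescale only the inertias exceeding the average 1/Q.
adj = ((Q / (Q - 1)) * (inertias[inertias > 1 / Q] - 1 / Q)) ** 2

# Principal coordinates of rows (cases) and columns (categories).
row_coords = np.diag(1 / np.sqrt(r)) @ U * sv
col_coords = np.diag(1 / np.sqrt(c)) @ Vt.T * sv

print("principal inertias:", np.round(inertias, 4))
print("adjusted inertias:", np.round(adj, 4))
```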
Abstract:
A major issue in the application of waveform inversion methods to crosshole georadar data is the accurate estimation of the source wavelet. Here, we explore the viability and robustness of incorporating this step into a time-domain waveform inversion procedure through an iterative deconvolution approach. Our results indicate that, at least in non-dispersive electrical environments, such an approach provides remarkably accurate and robust estimates of the source wavelet even in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity. Our results also indicate that the proposed source wavelet estimation approach is relatively insensitive to ambient noise and to the phase characteristics of the starting wavelet. Finally, there appears to be little-to-no trade-off between the wavelet estimation and the tomographic imaging procedures.
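As a rough illustration of what such a wavelet-estimation step involves, the following Python sketch performs a damped frequency-domain deconvolution of observed traces by synthetic impulse responses. It is a generic least-squares source estimate under assumed array shapes and regularization, not the authors' actual algorithm.

```python
# Schematic wavelet-update step: regularized spectral division of data by synthetics.
import numpy as np

def estimate_wavelet(observed, greens, eps=1e-3):
    """Least-squares wavelet estimate W(f) = sum(D G*) / (sum(|G|^2) + damping).

    observed : (n_traces, n_samples) recorded traces
    greens   : (n_traces, n_samples) synthetics computed with a delta source
               (impulse responses of the current subsurface model)
    """
    D = np.fft.rfft(observed, axis=1)
    G = np.fft.rfft(greens, axis=1)
    num = np.sum(D * np.conj(G), axis=0)
    den = np.sum(np.abs(G) ** 2, axis=0)
    W = num / (den + eps * den.max())          # damped spectral division
    return np.fft.irfft(W, n=observed.shape[1])

# In an iterative scheme the estimate is refreshed after every model update:
# synthetics are recomputed, the wavelet re-estimated, and the inversion
# continues until both stop changing significantly.
```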
Abstract:
This thesis gives an overview of the validation process for thermal hydraulic system codes, and it presents in more detail the assessment and validation of the French code CATHARE for VVER calculations. Three assessment cases are presented: loop seal clearing, core reflooding, and flow in a horizontal steam generator. The experience gained during these assessment and validation calculations has been used to analyze the behavior of the horizontal steam generator and the natural circulation in the geometry of the Loviisa nuclear power plant. The cases presented are not exhaustive, but they give a good overview of the work performed by the personnel of Lappeenranta University of Technology (LUT). A large part of the work has been performed in co-operation with the CATHARE team in Grenoble, France. The design of a Russian-type pressurized water reactor, VVER, differs from that of a Western-type PWR. Most thermal-hydraulic system codes are validated only for Western-type PWRs. Thus, the codes should also be assessed and validated for the VVER design in order to establish any weaknesses in the models. This information is needed before the codes can be used for safety analysis. The results of the assessment and validation calculations presented here show that the CATHARE code can also be used for thermal-hydraulic safety studies of VVER-type plants. However, some areas have been identified which need to be reassessed after further experimental data become available. These areas are mostly connected to the horizontal steam generators, such as condensation and phase separation in the primary-side tubes. The work presented in this thesis covers a large number of the phenomena included in the CSNI code validation matrices for small and intermediate leaks and for transients. Some of the phenomena included in the matrix for large-break LOCAs are also covered. The matrices for code validation for VVER applications should be used when future experimental programs are planned for code validation.
Abstract:
Pulsewidth-modulated (PWM) rectifier technology is increasingly used in industrial applications such as variable-speed motor drives, since it offers several desirable features such as sinusoidal input currents, controllable power factor, bidirectional power flow and high-quality DC output voltage. To achieve these features, however, an effective control system with fast and accurate current and DC voltage responses is required. Of the various control strategies proposed to meet these control objectives, in most cases the commonly known principle of synchronous-frame current vector control, together with some space-vector PWM scheme, has been applied. Recently, however, new control approaches analogous to the well-established direct torque control (DTC) method for electrical machines have also emerged to implement a high-performance PWM rectifier. In this thesis the concepts of classical synchronous-frame current control and DTC-based PWM rectifier control are combined, and a new converter-flux-based current control (CFCC) scheme is introduced. To achieve sufficient dynamic performance and to ensure stable operation, the proposed control system is thoroughly analysed and simple rules for the controller design are suggested. Special attention is paid to the estimation of the converter flux, which is the key element of converter-flux-based control. Discrete-time implementation is also discussed. Line-voltage-sensorless reactive power control methods for the L- and LCL-type line filters are presented. For the L-filter an open-loop control law for the d-axis current reference is proposed. In the case of the LCL-filter, combined open-loop and feedback control is proposed. The influence of erroneous filter parameter estimates on the accuracy of the developed control schemes is also discussed. A new zero-vector selection rule for suppressing the zero-sequence current in parallel-connected PWM rectifiers is proposed. With this method a truly standalone and independent control of the converter units is achieved, and traditional transformer-isolation and synchronised-control-based solutions are avoided. The implementation requires only one additional current sensor. The proposed schemes are evaluated by simulations and laboratory experiments. Satisfactory performance and good agreement between theory and practice are demonstrated.
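Since the converter flux is described as the key quantity of the CFCC scheme, a heavily simplified sketch of one common way to estimate such a flux is given below: the converter voltage is reconstructed from the switching state and the measured DC-link voltage and then integrated, with the pure integrator replaced by a low-pass filter to suppress drift. The symbols, reference frame and filter constant are assumptions for illustration, not the estimator used in the thesis.

```python
# Simplified discrete-time converter-flux estimator (illustrative only).
import numpy as np

def converter_voltage_alpha_beta(sa, sb, sc, u_dc):
    """Converter output voltage in the stationary alpha-beta frame from the
    three phase switching states (0/1) and the measured DC-link voltage."""
    u_alpha = u_dc * (2 * sa - sb - sc) / 3.0
    u_beta = u_dc * (sb - sc) / np.sqrt(3.0)
    return u_alpha, u_beta

def update_flux(psi_alpha, psi_beta, u_alpha, u_beta, Ts, tau=0.05):
    """One sampling step of the flux estimator: psi <- low-pass-filtered
    integral of the converter voltage (Euler discretization)."""
    a = Ts / tau
    psi_alpha = (1 - a) * psi_alpha + Ts * u_alpha
    psi_beta = (1 - a) * psi_beta + Ts * u_beta
    return psi_alpha, psi_beta
```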
Abstract:
In electric drives, frequency converters are used to generate, for the electric motor, an AC voltage with variable frequency and amplitude. When considering annual drive sales in terms of money and units sold, the use of low-performance drives appears to be predominant. These drives have to be very cost-effective to manufacture and use, while they are also expected to fulfil the harmonic distortion standards. One of the objectives has also been to extend the lifetime of the frequency converter. In a traditional frequency converter, a relatively large electrolytic DC-link capacitor is used. Electrolytic capacitors are large, heavy and rather expensive components. In many cases, the lifetime of the electrolytic capacitor is the main factor limiting the lifetime of the frequency converter. To overcome this problem, the electrolytic capacitor is replaced with a metallized polypropylene film capacitor (MPPF). The MPPF has improved properties compared with the electrolytic capacitor. By replacing the electrolytic capacitor with a film capacitor, the energy storage of the DC link is decreased. Thus, the instantaneous power supplied to the motor correlates with the instantaneous power taken from the network. This yields a continuous DC-link current fed by the diode rectifier bridge. As a consequence, the line current harmonics clearly decrease. Because of the decreased energy storage, the DC-link voltage fluctuates. This places additional demands on the controllers of the frequency converter to compensate for the fluctuation in the supplied motor phase voltages. In this work, three-phase and single-phase frequency converters with a small DC-link capacitor are analyzed. The evaluation is carried out with simulations and laboratory measurements.
Abstract:
BACKGROUND AND PURPOSE: Information about outcomes in patients with Embolic Stroke of Undetermined Source (ESUS) is unavailable. This study provides a detailed analysis of the outcomes of a large ESUS population. METHODS: The data set was derived from the Athens Stroke Registry. ESUS was defined according to the Cryptogenic Stroke/ESUS International Working Group criteria. End points were mortality, stroke recurrence, functional outcome, and a composite cardiovascular end point comprising recurrent stroke, myocardial infarction, aortic aneurysm rupture, systemic embolism, or sudden cardiac death. We performed Kaplan-Meier analyses to estimate the cumulative probabilities of the outcomes by stroke type and Cox regression to investigate whether stroke type was an outcome predictor. RESULTS: 2731 patients were followed up for a mean of 30.5±24.1 months. There were 73 (26.5%) deaths, 60 (21.8%) recurrences, and 78 (28.4%) composite cardiovascular end points among the 275 ESUS patients. The cumulative probability of survival in ESUS was 65.6% (95% confidence interval [CI], 58.9%-72.2%), significantly higher compared with cardioembolic stroke (38.8%; 95% CI, 34.9%-42.7%). The cumulative probability of stroke recurrence in ESUS was 29.0% (95% CI, 22.3%-35.7%), similar to cardioembolic strokes (26.8%; 95% CI, 22.1%-31.5%), but significantly higher compared with all types of noncardioembolic stroke. One hundred seventy-two (62.5%) ESUS patients had a favorable functional outcome, compared with 280 (32.2%) cardioembolic and 303 (60.9%) large-artery atherosclerotic stroke patients. ESUS patients had a similar risk of the composite cardiovascular end point as all other stroke types, with the exception of lacunar strokes, which had a significantly lower risk (adjusted hazard ratio, 0.70 [95% CI, 0.52-0.94]). CONCLUSIONS: Long-term mortality risk in ESUS is lower compared with cardioembolic strokes, despite similar rates of recurrence and of the composite cardiovascular end point. Recurrent stroke risk is higher in ESUS than in noncardioembolic strokes.
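For readers unfamiliar with the survival analysis mentioned in the methods, the following short Python sketch computes a Kaplan-Meier survival curve from follow-up times and event indicators; the numbers are invented and do not reproduce the registry data.

```python
# Minimal Kaplan-Meier (product-limit) estimator.
import numpy as np

def kaplan_meier(time, event):
    """Return event times and the Kaplan-Meier survival curve S(t).

    time  : follow-up time for each patient (e.g. months)
    event : 1 if the end point (e.g. death) occurred, 0 if censored
    """
    time, event = np.asarray(time, float), np.asarray(event, int)
    surv, s = [], 1.0
    event_times = np.unique(time[event == 1])
    for t in event_times:
        at_risk = np.sum(time >= t)                 # patients still in follow-up
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk                 # product-limit update
        surv.append(s)
    return event_times, np.array(surv)

# Illustrative usage with made-up follow-up data (months, event flags).
t, s = kaplan_meier([3, 10, 12, 24, 30, 36], [1, 0, 1, 1, 0, 0])
print(t, s)
```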
Abstract:
The maximum realizable power throughput of power electronic converters may be limited or constrained by technical or economic considerations. One solution to this problem is to connect several power converter units in parallel. The parallel connection can be used to increase the current-carrying capacity of the overall system beyond the ratings of individual power converter units. Thus, it is possible to use several lower-power converter units, produced in large quantities, as building blocks to construct high-power converters in a modular manner. High-power converters realized by using parallel connection are needed, for example, in multimegawatt wind power generation systems. Parallel connection of power converter units is also required in emerging applications such as photovoltaic and fuel cell power conversion. The parallel operation of power converter units is not, however, problem free. This is because parallel-operating units are subject to overcurrent stresses, which are caused by unequal load current sharing or currents that flow between the units. Commonly, the term 'circulating current' is used to describe both the unequal load current sharing and the currents flowing between the units. Circulating currents, in turn, are caused by component tolerances and asynchronous operation of the parallel units. Parallel-operating units are also subject to stresses caused by unequal thermal stress distribution. Both of these problems can, nevertheless, be handled with proper circulating current control. To design an effective circulating current control system, we need information about circulating current dynamics. The dynamics of the circulating currents can be investigated by developing appropriate mathematical models. In this dissertation, circulating current models are developed for two different types of parallel two-level three-phase inverter configurations. The models, which are developed for an arbitrary number of parallel units, provide a framework for analyzing circulating current generation mechanisms and developing circulating current control systems. In addition to developing circulating current models, the modulation of parallel inverters is considered. It is illustrated that, depending on the parallel inverter configuration and the modulation method applied, common-mode circulating currents may be excited as a consequence of the differential-mode circulating current control. To prevent the common-mode circulating currents that are caused by the modulation, a dual modulator method is introduced. The dual modulator basically consists of two independently operating modulators, the outputs of which together constitute the switching commands of the inverter. The two independently operating modulators are referred to as the primary and secondary modulators. In its intended usage, the same voltage vector is fed to the primary modulators of each parallel unit, and the inputs of the secondary modulators are obtained from the circulating current controllers. To ensure that the voltage commands obtained from the circulating current controllers are realizable, it must be guaranteed that the inverter is not driven into saturation by the primary modulator. Inverter saturation can be prevented by limiting the inputs of the primary and secondary modulators. Because of this, a limitation algorithm is also proposed. The operation of both the proposed dual modulator and the limitation algorithm is verified experimentally.
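A toy sketch of the dual-modulator idea described above follows: the shared voltage reference (primary modulator input) is combined with a per-unit correction from the circulating-current controller (secondary modulator input), and both inputs are limited so the combined reference stays realizable. The per-unit scaling, margin value and vector frame are illustrative assumptions, not the thesis implementation or its limitation algorithm.

```python
# Toy dual-modulator reference combination with simple magnitude limiting.
import numpy as np

def limit(vec, max_len):
    """Scale a voltage reference vector down to a given maximum length."""
    length = np.hypot(*vec)
    return vec if length <= max_len else vec * (max_len / length)

def dual_modulator_reference(u_common, u_circ, margin=0.1):
    """Combine the shared reference (primary modulator input) with the
    per-unit circulating-current correction (secondary modulator input).
    References are assumed to be alpha-beta vectors normalized to the
    maximum realizable voltage (1.0)."""
    u_primary = limit(np.asarray(u_common, float), 1.0 - margin)  # leave headroom
    u_secondary = limit(np.asarray(u_circ, float), margin)
    return u_primary + u_secondary   # passed on to the PWM stage of this unit
```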
Abstract:
This master's thesis examines source-text and target-text orientation in drama translation. The objects of study were the translations' vocabulary, syntax, stagecraft, imagery, wordplay, metre and style. The aim of the study was to determine whether the theoretical shift of emphasis from source-text orientation towards target-text orientation is visible in Finnish drama translation. The assumption was that the shift would show in the translation strategies used. The theoretical part of the study first discusses source-text-oriented and target-text-oriented translation theories. Two source-text-oriented theories are presented first: Catford's (1965) formal correspondence and Nida's (1964) dynamic equivalence. Of the target-text-oriented theories, the theoretical views of Toury (1980) and Newmark (1981) are discussed, as well as the skopos theory introduced by Reiss and Vermeer (1986). The principles of foreignization and domestication are presented briefly. The theoretical part also deals with drama translation, the language of William Shakespeare and the translation problems associated with it. In addition, I briefly present Shakespeare translation in Finland and the four translators of Julius Caesar. The material of the study consisted of four Finnish translations of Shakespeare's play Julius Caesar: Paavo Cajander's translation, published in 1883, Eeva-Liisa Manner's in 1983, Lauri Sipari's in 2006 and Jarkko Laine's in 2007. In the analysis, the translations were compared with the source text and with one another, and the translators' translation solutions were compared. The results were in line with the assumption: source-text-oriented translation strategies were used less in the newer translations than in the older ones. The target-text-oriented strategies differed considerably from one another, and the newest translation can be described as an adaptation. In future research, the material should be extended to cover other Finnish translations of Shakespeare's plays as well. Translations from different periods should be compared with one another in order to reliably describe the change in the use of source-text-oriented and target-text-oriented translation strategies and to map the strategies typical of each period.
Abstract:
We transplanted 47 patients with Fanconi anemia using an alternative source of hematopoietic cells. The patients were assigned to the following groups: group 1, unrelated bone marrow (N = 15); group 2, unrelated cord blood (N = 17); and group 3, related non-sibling bone marrow (N = 15). Twenty-four patients (51%) had complete engraftment, which was not influenced by gender (P = 0.87), age (P = 0.45), dose of cyclophosphamide (P = 0.80), nucleated cell dose infused (P = 0.60), or use of anti-T serotherapy (P = 0.20). Favorable factors for superior engraftment were full HLA compatibility (independent of the source of cells; P = 0.007) and use of a fludarabine-based conditioning regimen (P = 0.046). Unfavorable factors were ≥25 transfusions pre-transplant (P = 0.011) and the degree of HLA disparity (P = 0.007). Intensity of mucositis (P = 0.50) and use of androgen prior to transplant (P = 0.80) had no influence on survival. Acute graft-versus-host disease (GVHD) grade II-IV and chronic GVHD were diagnosed in 47 and 23% of available patients, respectively, and infections prevailed as the main cause of death, whether or not associated with GVHD. Eighteen patients are alive, the Kaplan-Meier overall survival is 38% at ~8 years, and the best results were obtained in patients receiving related non-sibling bone marrow. Three recommendations emerged from the present study: fludarabine as part of conditioning, transplantation in patients with <25 transfusions, and avoidance of HLA disparity. In addition, an extended family search (even when consanguinity is not present) seeking a related non-sibling donor is highly recommended.
Abstract:
Simultaneous EEG-functional magnetic resonance imaging (fMRI) measurements combine the high temporal resolution of EEG with the distinctive spatial resolution of fMRI. The purpose of this EEG-fMRI study was to search for hemodynamic responses (blood oxygen level-dependent, BOLD, responses) associated with interictal activity in a case of right mesial temporal lobe epilepsy before and after a successful selective amygdalohippocampectomy. The study thus located the epileptogenic source with this noninvasive imaging technique and compared the results after removal of the atrophied hippocampus. Additionally, the present study investigated the effectiveness of two different ways of localizing epileptiform spike sources, i.e., BOLD contrast and an independent component analysis dipole model, by comparing their respective outcomes with the resected epileptogenic region. Our findings suggested that the right hippocampus induced the large interictal activity observed in the left hemisphere. Although almost a quarter of the dipoles were found near the right hippocampus region, dipole modeling resulted in a widespread distribution, making EEG analysis alone too weak to precisely determine the source localization, even with a sophisticated method of analysis such as independent component analysis. On the other hand, the combined EEG-fMRI technique made it possible to highlight the epileptogenic foci quite efficiently.
Abstract:
Traditionally, legacy object-oriented applications integrate different functional aspects. These aspects may be scattered throughout the code. There are different types of aspects: • aspects that represent business functionalities; • aspects that address non-functional requirements or other design considerations such as robustness, distribution, security, etc. Generally, the code that implements these aspects cuts across several class hierarchies. Several researchers have studied the problem of modularizing these aspects in the code: subject-oriented programming, aspect-oriented programming and view-oriented programming. All these methods propose techniques and tools for designing object-oriented applications as compositions of code fragments that address different aspects. Separating the aspects in the code has advantages for reuse and maintenance. It is therefore important to identify and locate these aspects in legacy object-oriented code. We are particularly interested in functional aspects. Assuming that the code implementing a functional aspect, or feature, exhibits a certain functional cohesion (dependencies between its elements), we propose to identify such features from the code. The idea is to identify, in the absence of aspect-oriented programming paradigms, the techniques that allow the different functional aspects to be implemented in object-oriented code. Our approach consists of: • identifying the techniques used by developers to integrate a feature in the absence of aspect-oriented techniques; • characterizing the footprint these techniques leave on the code; • and developing tools to identify these footprints. We thus present two approaches for identifying the features present in object-oriented code. The first identifies different design patterns that allow these features to be integrated into the code. The second uses formal concept analysis to identify recurring features in the code. We experiment with both approaches on open-source object-oriented systems to identify the different features in the code. The results show that our approaches are effective at identifying the different features in legacy object-oriented code and allow refactoring opportunities to be suggested.
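To make the second approach concrete, the following Python sketch applies formal concept analysis to a tiny, invented binary context relating program elements to the properties they exhibit; concepts whose extent groups several elements around shared properties are candidate recurring features. The context, names and brute-force concept enumeration are illustrative assumptions, not the tool developed in the thesis.

```python
# Minimal formal concept analysis on a toy context (program elements x properties).
from itertools import combinations

# Illustrative context: methods and the properties they exhibit
# (e.g. fields accessed, APIs called).  Names are purely made up.
context = {
    "Order.print":   {"uses_logger", "reads_order"},
    "Order.save":    {"uses_db", "reads_order"},
    "Invoice.print": {"uses_logger", "reads_invoice"},
    "Invoice.save":  {"uses_db", "reads_invoice"},
}

all_attrs = frozenset(a for attrs in context.values() for a in attrs)

# The intents of a formal context are closed under intersection: start from
# the object intents and keep intersecting until a fixed point is reached.
intents = {all_attrs} | {frozenset(v) for v in context.values()}
changed = True
while changed:
    changed = False
    for a, b in combinations(list(intents), 2):
        c = a & b
        if c not in intents:
            intents.add(c)
            changed = True

# Each intent, paired with the objects sharing it, forms a formal concept.
# Concepts grouping several methods around the same properties are candidate
# cross-cutting features (e.g. "persistence", "logging").
for intent in sorted(intents, key=len, reverse=True):
    extent = {obj for obj, attrs in context.items() if intent <= attrs}
    print(sorted(extent), "<->", sorted(intent))
```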
Abstract:
In this article, based on a talk given at the Leg@l.IT conference (www.legalit.ca), the author offers a quick overview of the functionalities provided by the electronic filing systems of the Federal Court and the Tax Court of Canada, in order to identify the advantages and drawbacks of each of the proposed technologies. This exercise is part of a broader reflection on the consequences of the gradual migration of certain jurisdictions toward electronic filing. While this attempt to modernize the judicial process is meant to be beneficial, a technological change of such magnitude is nonetheless not without risks or without impact on the habits and customs of the judicial system. The author thus questions the practice adopted by some courts of developing, in silos, solutions for computerizing the management of court records. The lack of system compatibility and the retreat toward proprietary models are causes for concern. Moreover, by entrusting the development of these systems to firms that retain ownership of the source code, courts contribute to a certain privatization of the process, making the networking of the judicial system all the more difficult. Yet, insofar as the systems of different courts will be called upon to communicate and exchange data, the adoption of compatible and open technological solutions is called for. Another problem lies in the apparent inability of the legislator to keep pace with the move toward the virtualization of the judicial process. Technological change imposes, in some cases, a conceptual change that is difficult to reconcile with the applicable legislation. This observation implies the need for deeper reflection on whether the law should be adapted to the technology or the technology to the law, in order to ensure a coherent and effective coexistence of these two worlds.
Abstract:
Code review is an essential practice regardless of a project's maturity; it aims to evaluate the contribution made by the code submitted by developers. In principle, code review improves the quality of code changes (patches) before they are committed to the project's master repository. In practice, carrying out this process does not rule out the possibility that some bugs go unnoticed. In this document, we present an empirical study investigating the code review process of a large open-source project. We investigate the relationships between reviewers' inspections and the personal and temporal factors that may affect the quality of such inspections. First, we report a quantitative study in which we use the SZZ algorithm to detect bug-inducing changes, which we then linked with the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss certain bugs are correlated both with their personal characteristics and with the technical properties of the patches under review. Next, we report a qualitative study in which we invited Mozilla developers to give their opinion on the attributes of a well-conducted code review. The results of our survey suggest that developers consider technical aspects (patch size, number of chunks and modules) as well as personal characteristics (experience and review queue) to be factors strongly influencing the quality of code reviews.
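To illustrate the kind of analysis involved in the quantitative part, the following Python sketch implements a simplified SZZ-style step: for a given bug-fixing commit it blames the lines deleted by the fix in the parent revision, yielding candidate bug-inducing commits. The repository path and commit hash are placeholders, and the sketch is a deliberately simplified assumption, not the study's actual tooling.

```python
# Simplified SZZ-style detection of candidate bug-inducing commits.
import re
import subprocess


def git(*args, cwd):
    return subprocess.run(["git", *args], cwd=cwd, capture_output=True,
                          text=True, check=True).stdout


def bug_inducing_candidates(repo, fix_commit):
    candidates = set()
    # Unified diff of the fix against its first parent, with zero context,
    # so every '-' line is a line actually deleted by the fix.
    diff = git("diff", "-U0", f"{fix_commit}^", fix_commit, cwd=repo)
    path, old_line = None, None
    for line in diff.splitlines():
        if line.startswith("--- a/"):
            path = line[6:]
        elif line.startswith("@@"):
            # Hunk header: @@ -<old_start>[,<count>] +<new_start>[,<count>] @@
            old_line = int(re.match(r"@@ -(\d+)", line).group(1))
        elif line.startswith("-") and not line.startswith("---"):
            if path and old_line:
                # Blame the deleted line in the parent to find who last touched it.
                blame = git("blame", "-l", "-L", f"{old_line},{old_line}",
                            f"{fix_commit}^", "--", path, cwd=repo)
                candidates.add(blame.split()[0].lstrip("^"))
            old_line += 1
        elif line.startswith("+") and not line.startswith("+++"):
            pass  # added lines do not advance the old-file line counter
    return candidates


if __name__ == "__main__":
    # Hypothetical usage: point at a local clone and a known bug-fix commit.
    print(bug_inducing_candidates("/path/to/repo", "abc1234"))
```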