943 results for automated full waveform logging system


Relevance: 30.00%

Abstract:

A solution of (18)F was standardised with a 4πβ-4πγ coincidence counting system in which the beta detector is a one-inch diameter cylindrical UPS89 plastic scintillator positioned at the bottom of a well-type 5''x5'' NaI(Tl) gamma-ray detector. Almost full detection efficiency, which could be varied downwards electronically, was achieved in the beta channel. Aliquots of this (18)F solution were also measured using 4πγ NaI(Tl) integral counting with Monte Carlo calculated efficiencies, as well as the CIEMAT-NIST method. Secondary measurements of the same solution were also performed with an IG11 ionisation chamber whose equivalent activity is traceable to the Système International de Référence through the contribution IRA-METAS made to it in 2001; IRA's degree of equivalence was found to be close to the key comparison reference value (KCRV). The (18)F activity predicted by this coincidence system agrees closely with the ionisation chamber measurement and is compatible, to within one standard deviation, with the other primary measurements. This work demonstrates that our new coincidence system can standardise short-lived radionuclides used in nuclear medicine.
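For readers less familiar with the method, the idealised textbook relations behind 4πβ-4πγ coincidence counting (a schematic sketch only, not the correction-laden expressions used for this particular UPS89/NaI(Tl) set-up) are:

```latex
% Idealised coincidence-counting relations: N_0 is the source activity,
% eps_beta and eps_gamma are the beta- and gamma-channel efficiencies.
\[
  N_\beta = N_0\,\varepsilon_\beta, \qquad
  N_\gamma = N_0\,\varepsilon_\gamma, \qquad
  N_c = N_0\,\varepsilon_\beta\,\varepsilon_\gamma
  \;\Longrightarrow\;
  N_0 = \frac{N_\beta\,N_\gamma}{N_c}.
\]
```

Electronically lowering the beta efficiency, as described above, provides the range of efficiencies over which the estimate N_β N_γ / N_c can be checked, in the spirit of the usual efficiency-extrapolation approach.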

Relevance: 30.00%

Abstract:

In the context of the investigation of the use of automated fingerprint identification systems (AFIS) for the evaluation of fingerprint evidence, the current study investigates the variability of scores from an AFIS when fingermarks from a known donor are compared to fingerprints that are not from the same source. The ultimate goal is to propose a model, based on likelihood ratios, which allows the evaluation of mark-to-print comparisons. In particular, this model, through its use of AFIS technology, benefits from the possibility of using a large amount of data, as well as from an already built-in proximity measure, the AFIS score. More precisely, the numerator of the LR is obtained from scores issued from comparisons between impressions from the same source and showing the same minutia configuration. The denominator of the LR is obtained by extracting scores from comparisons of the questioned mark with a database of non-matching sources. This paper focuses solely on the assignment of the denominator of the LR, which we refer to by the generic term of between-finger variability. The issues addressed in relation to between-finger variability are the required sample size and the influence of the finger number and general pattern, as well as of the number of minutiae included and their configuration on a given finger. Results show that reliable estimation of between-finger variability is feasible with 10,000 scores. These scores should come from the appropriate finger number/general pattern combination as defined by the mark. Furthermore, strategies for assigning between-finger variability when these elements cannot be conclusively determined from the mark (or, for finger number, from its position with respect to other marks) are presented. These results immediately allow case-by-case estimation of between-finger variability in an operational setting.
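A minimal sketch of how the between-finger (denominator) distribution could be used in practice, assuming the AFIS scores are already available as plain numbers; the function and variable names below are hypothetical and do not correspond to any particular AFIS:

```python
import numpy as np
from scipy.stats import gaussian_kde

def lr_denominator(questioned_mark_score, non_match_scores):
    """Estimate P(score | different sources) at the observed AFIS score.

    non_match_scores: scores from comparing the questioned mark against a
    database of impressions known NOT to come from the same donor (the study
    above suggests ~10,000 such scores, drawn from the finger number /
    general pattern combination indicated by the mark).
    """
    density = gaussian_kde(non_match_scores)   # smooth the non-match score histogram
    return density(questioned_mark_score)[0]   # density evaluated at the observed score

# Toy usage with simulated scores (illustration only, not real AFIS output):
rng = np.random.default_rng(0)
background = rng.gamma(shape=2.0, scale=15.0, size=10_000)
print(lr_denominator(questioned_mark_score=120.0, non_match_scores=background))
```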

Relevance: 30.00%

Abstract:

Fine particulate matter from traffic increases mortality and morbidity. An important source of traffic particles is brake wear. American studies reported cars to emit brake wear particles at a rate of about 11 mg/km to 20 mg/km of driven distance. A German study estimated that brake wear contributes about 12.5% to 21% of total traffic particle emissions. The goal of this study was to build a system that allows the study of brake wear particle emissions during different braking behaviours of different car and brake types. The particles were to be characterized in terms of size, number, metal content, and elemental and organic carbon composition. In addition, the influence of different deceleration schemes on particle composition and size distribution was to be studied. Finally, the system should allow exposing human cell cultures to these particles. An exposure box (0.25 m3 volume) was built that can be mounted around a car's braking system. This allows exposing cells to fresh brake wear particles. Concentrations of particle number, mass and surface, metals, and carbon compounds were quantified. Tests were conducted with A549 lung epithelial cells. Five different cars and two typical braking behaviours (full stop and normal deceleration) were tested. Particle number and size distribution were analysed for the first six minutes, during which two braking events occurred. Full stop produced significantly higher particle concentrations than normal deceleration (average of 23,000 vs. 10,400 #/cm3, p = 0.016). The particle number distribution was bi-modal, with one peak at 60 to 100 nm (depending on the tested car and braking behaviour) and a second peak at 200 to 400 nm. Metal concentrations varied depending on the tested car type. Iron (range of 163 to 15,600 μg/m3) and manganese (range of 0.9 to 135 μg/m3) were present in all samples, while copper was absent in some samples (<6 to 1,220 μg/m3). The overall "fleet" metal ratio was Fe:Cu:Mn = 128:14:1. Temperature and humidity varied little. A549 cells were successfully exposed in the various experimental settings and retained their viability. Culture supernatant was stored and cell culture samples were fixed to test for inflammatory response; analysis of these samples is ongoing. The established system allowed testing brake wear particle emissions from real-world cars. The large variability in chemical composition and emitted amounts of brake wear particles between car models seems to be related to differences between brake pad compositions of different producers. Initial results suggest that the conditions inside the exposure box allow exposing human lung epithelial cells to freshly produced brake wear particles.
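For the group comparison quoted above (full stop versus normal deceleration), a Mann-Whitney U test of per-run particle number concentrations is the natural non-parametric choice; the sketch below uses made-up per-run values, since the individual measurements are not given in the abstract:

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-run mean particle number concentrations (#/cm3);
# the real per-run values are not reported in the abstract.
full_stop = [26_000, 21_500, 24_800, 19_900, 22_700]
normal_deceleration = [11_200, 9_800, 10_900, 9_500, 10_600]

stat, p = mannwhitneyu(full_stop, normal_deceleration, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```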

Relevance: 30.00%

Abstract:

Equality with men in the world of paid work has been a major feminist objective. Given that work in the 'public' sphere has historically been shaped on the assumption that the 'worker' will be male, national employment systems which facilitate 'masculine' employment patterns (i.e. full-time work and unbroken employment careers) might be expected to be more likely to generate gender equality. This paper compares women's employment in France (where 'masculine' careers for women are common) and Britain (where part-time work and broken employment careers are more likely) at the macro, meso (occupational), and micro (individual) levels. The two occupations studied are finance and pharmacy. The evidence presented suggests that there are considerable similarities between women in the two countries at the occupational and individual level, despite national variations. In the light of this evidence, structural and individual explanations of women's employment behaviour are examined, and the continuing significance of structural constraint on the patterning of gender relations is emphasised.

Relevance: 30.00%

Abstract:

Background: In addition to opportunistic infections of the central nervous system (CNS), which are due to the immunosuppression related to HIV, the virus itself can cause neuropathological abnormalities, located mainly in the basal ganglia and characterized by microglial giant cells, reactive astrocytosis and perivascular monocytes. This HIV encephalopathy is characterized clinically by psycho-motor slowing, memory loss, difficulties in complex tasks requiring executive functions, as well as motor disorders. These cognitive deficits are grouped under the acronym of HIV-associated neurocognitive disorders (HAND). HANDs are subdivided into three groups according to the severity of the cognitive impairment: Asymptomatic Neurocognitive Impairment (ANI), Mild/moderate Neurocognitive Disorders (MND) and HIV-Associated Dementia (HAD). While the incidence of HAD has significantly decreased in the era of combined antiretroviral therapy (cART), the prevalence of milder forms of HAND seems to have increased; there are many potential reasons for this state of affairs. An important question is how soon the brain may be affected by HIV. Since performing a brain biopsy in these patients is not an option, the study of the CSF represents the best available way to look at putative biomarkers of inflammation/neurodegeneration in the CNS. Here, we wanted to examine the putative usefulness of different biomarkers as early indicators of antiretroviral failure at the level of the CNS. We chose to study the CSF levels of amyloid-β 1-42 (Aβ42), total Tau (tTau), phosphorylated Tau (pTau), neopterin and S100-β. Indeed, these molecules are representative biomarkers of the major cells of the CNS, i.e. neurons, macrophages/microglia and astrocytes. To examine how sensitive these CSF biomarkers are as indicators of CNS insults caused by HIV, we took advantage of the MOST (Monotherapy Switzerland/Thailand) study, recently published in AIDS, in collaboration with Prof. Pietro Vernazza in St Gallen. In the MOST study, monotherapy (MT) consisting of ritonavir-boosted lopinavir (LPV/r) was compared with continuous conventional antiretroviral therapy including several molecules, hereafter referred to as CT.

Methods: We tested 61 cerebrospinal fluid (CSF) samples from 52 patients enrolled in MOST, including 34 CSF samples from CT and 27 from MT (mean duration on MT: 47 ± 20 weeks), in patients who maintained full viral load (VL) suppression in blood (<50 cps/ml). Using enzyme-linked immunosorbent assay (ELISA), we determined the CSF concentration of S100-beta (astrocytosis), neopterin (microglia, inflammation), total Tau (tTau), phosphorylated Tau (pTau), and amyloid-beta 1-42 (Abeta), the latter three markers indicating neuronal damage. The CSF samples of 37 HIV-negative patients with Alzheimer dementia (AD) served as controls. Results are expressed in pg/ml and reported as median ± interquartile range. The Mann-Whitney U test was used to compare the results of a given biomarker between two groups and the Fisher test to compare frequencies.

Results: We found a higher concentration of S100-beta (570±1132) and neopterin (2.5±2.9) in the CSF of MT versus CT (0±532, p=0.002 and 1.2±2.5, p=0.058, respectively). A cutoff of 940 pg/ml for S100-beta allowed discrimination of MT (11 above versus 16 below) from CT (1 vs 33, p=0.0003). To a lesser extent, a cutoff of 11 pg/ml for neopterin separated MT (4 above versus 23 below) from CT (0 vs 34, p=0.034). In AD, tTau was higher (270±414) and Abeta lower (234±328) than in CT (150±153, p=0.0078, and 466±489, p=0.007, respectively). As for CT, Abeta was lower in AD than in MT (390±412, p=0.01). However, in contrast with CT, the levels of tTau were not different between AD and MT (199±177, p=0.11). S100-beta (173±214; p=0.0006) and neopterin (1.1±0.9; p=0.0014) were lower in AD than in MT.

Conclusions: Despite full VL suppression in blood, HIV under monotherapy is sufficient to trigger inflammation and, especially, astrocytosis. CSF markers of patients on CT have the same profile as reported for healthy subjects, suggesting that CT permits good control of HIV in the brain. Finally, the levels of tTau, which are relatively similar between AD and MT patients, suggest that neurons are damaged during monotherapy.
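The discriminating power of the 940 pg/ml S100-beta cutoff can be reproduced directly from the 2x2 counts reported above (11/16 above/below the cutoff in MT versus 1/33 in CT); a minimal sketch:

```python
from scipy.stats import fisher_exact

# Counts taken from the abstract: S100-beta above vs. below 940 pg/ml.
#         above  below
table = [[11, 16],   # monotherapy (MT)
         [ 1, 33]]   # conventional therapy (CT)

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p:.4f}")  # abstract reports p = 0.0003
```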

Relevance: 30.00%

Abstract:

Emotional and neuroendocrine regulation have been shown to be associated. However, results are inconsistent. This paper explores the functioning of, and relationships between, these two systems in 54 healthy preterm and 25 full-term infants at six months of age. Results showed significant differences between very preterm and full-term children in emotional intensity and regulation, as well as in neuroendocrine regulation. No evidence of an association between neuroendocrine and emotional regulation was found. Results suggest a possible delay in the maturation of the neuroendocrine system, as well as in emotional regulation, in very preterm infants.

Relevance: 30.00%

Abstract:

We report experimental and numerical results showing how certain N-dimensional dynamical systems are able to exhibit complex time evolutions based on the nonlinear combination of N-1 oscillation modes. The experiments were done with a family of thermo-optical systems of effective dynamical dimension varying from 1 to 6. The corresponding mathematical model is an N-dimensional vector field based on a scalar-valued nonlinear function of a single variable that is a linear combination of all the dynamic variables. We show how the complex evolutions appear in association with the occurrence of successive Hopf bifurcations in a saddle-node pair of fixed points, until their instability capabilities in N dimensions are exhausted. For this reason the observed phenomenon is denoted the full instability behavior of the dynamical system. The process through which the attractor responsible for the observed time evolution is formed may be rather complex and difficult to characterize. Nevertheless, the well-organized structure of the time signals suggests some generic mechanism of nonlinear mode mixing that we associate with the cluster of invariant sets emerging from the pair of fixed points and with the influence of the neighboring saddle sets on the flow near the attractor. The generation of invariant tori is likely during the full instability development, and the global process may be considered a generalized Landau scenario for the emergence of irregular and complex behavior through the nonlinear superposition of oscillatory motions.
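One generic way to write down a vector field with the structure described above, as a sketch consistent with the verbal description rather than the authors' specific thermo-optical model, is a form in which a single scalar nonlinearity f acts on a linear combination of all the dynamic variables:

```latex
% N-dimensional field with one scalar nonlinearity of a single argument;
% A is a constant matrix, b and c constant vectors, f a scalar nonlinear function.
\[
  \dot{\mathbf{x}} \;=\; A\,\mathbf{x} \;+\; \mathbf{b}\,
  f\!\left(\mathbf{c}^{\top}\mathbf{x}\right),
  \qquad \mathbf{x}\in\mathbb{R}^{N}.
\]
```

In such a setting, each successive Hopf bifurcation of the saddle-node pair of fixed points contributes one additional oscillation mode, consistent with the N-1 modes mentioned above.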

Relevance: 30.00%

Abstract:

In hyperdiploid acute lymphoblastic leukaemia (ALL), the simultaneous occurrence of specific aneuploidies confers a more favourable outcome than hyperdiploidy alone. Interphase (I) FISH complements conventional cytogenetics (CC) through its sensitivity and ability to detect chromosome aberrations in non-dividing cells. To overcome the limits of manual I-FISH, we developed an automated four-colour I-FISH approach and assessed its ability to detect concurrent aneuploidies in ALL. I-FISH was performed using centromeric probes for chromosomes 4, 6, 10 and 17. Parameters established for automatic nucleus selection and signal detection were evaluated (3 controls). Cut-off values were determined (10 controls, 1000 nuclei/case). Combinations of aneuploidies were considered relevant when each aneuploidy was individually significant. Results obtained in 10 ALL patients (1500 nuclei/patient) were compared with those by CC. Various combinations of aneuploidies were identified. All clones detected by CC were observed by I-FISH. I-FISH revealed numerous additional abnormal clones, ranging between 0.1% and 31.6%, based on the large number of nuclei evaluated. Four-colour automated I-FISH permits the identification of concurrent aneuploidies of prognostic significance in hyperdiploid ALL. Large numbers of cells can be analysed rapidly by this method. Owing to its high sensitivity, the method provides a powerful tool for the detection of small abnormal clones at diagnosis and during follow-up. Compared to CC, it generates a more detailed cytogenetic picture, the biological and clinical significance of which merits further evaluation. Once optimised for a given set of probes, the system can be easily adapted for other probe combinations.
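A minimal sketch of the decision logic stated above, in which a combination of aneuploidies is only reported when each constituent aneuploidy is individually significant; the probe names, observed fractions and cut-off values are placeholders, not the calibrated values of the study:

```python
# Fraction of nuclei with an abnormal signal count per centromeric probe,
# and per-probe cut-offs derived from control samples (placeholder values).
observed_fraction = {"cen4": 0.18, "cen6": 0.02, "cen10": 0.21, "cen17": 0.15}
cutoff = {"cen4": 0.05, "cen6": 0.05, "cen10": 0.05, "cen17": 0.05}

significant = {probe for probe, frac in observed_fraction.items() if frac > cutoff[probe]}

def combination_relevant(probes, significant_probes):
    """Report a concurrent aneuploidy only if every probe in the combination
    is individually above its own cut-off."""
    return all(probe in significant_probes for probe in probes)

print(combination_relevant(("cen4", "cen10", "cen17"), significant))  # True
print(combination_relevant(("cen4", "cen6"), significant))            # False
```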

Relevance: 30.00%

Abstract:

HAMAP (High-quality Automated and Manual Annotation of Proteins; available at http://hamap.expasy.org/) is a system for the automatic classification and annotation of protein sequences. HAMAP provides annotation of the same quality and detail as UniProtKB/Swiss-Prot, using manually curated profiles for protein sequence family classification and expert curated rules for functional annotation of family members. HAMAP data and tools are made available through our website and as part of the UniRule pipeline of UniProt, providing annotation for millions of unreviewed sequences of UniProtKB/TrEMBL. Here we report on the growth of HAMAP and updates to the HAMAP system since our last report in the NAR Database Issue of 2013. We continue to augment HAMAP with new family profiles and annotation rules as new protein families are characterized and annotated in UniProtKB/Swiss-Prot; the latest version of HAMAP (as of 3 September 2014) contains 1983 family classification profiles and 1998 annotation rules (up from 1780 and 1720, respectively). We demonstrate how the complex logic of HAMAP rules allows for precise annotation of individual functional variants within large homologous protein families. We also describe improvements to our web-based tool HAMAP-Scan which simplify the classification and annotation of sequences, and the incorporation of an improved sequence-profile search algorithm.
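To illustrate the kind of conditional logic such annotation rules encode, the hedged sketch below applies an annotation only when a sequence both matches a family profile and carries the residue required for a functional variant; the identifiers and rule content are invented for illustration and are not HAMAP syntax:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    profile_id: str            # family classification profile the rule is tied to
    required_residue: tuple    # (0-based position, amino acid) needed for the variant
    annotation: str            # annotation propagated when all conditions hold

def annotate(sequence: str, matched_profiles: set, rule: Rule):
    """Return the rule's annotation if the sequence belongs to the family
    and carries the residue required for this functional variant."""
    pos, aa = rule.required_residue
    in_family = rule.profile_id in matched_profiles
    has_residue = len(sequence) > pos and sequence[pos] == aa
    return rule.annotation if in_family and has_residue else None

# Toy usage (placeholder profile and rule, not real HAMAP entries):
rule = Rule("MF_EXAMPLE", (41, "C"), "cofactor: Zn(2+), bound via Cys-42")
print(annotate("M" + "A" * 40 + "C" + "A" * 20, {"MF_EXAMPLE"}, rule))
```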

Relevance: 30.00%

Abstract:

This research involved two studies: one to determine the local geoid in order to obtain mean sea level elevation from the global positioning system (GPS) to an accuracy of ±2 cm, and the other to determine the location of roadside features such as mile posts and stop signs, for safety studies, geographic information systems (GIS), and maintenance applications, from video imagery collected by a van traveling at traffic speed.
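The conversion at the heart of the first study is the standard relation between the ellipsoidal height delivered by GPS, the local geoid undulation, and the orthometric (mean sea level) elevation:

```latex
% h = ellipsoidal height from GPS, N = local geoid undulation,
% H = orthometric (mean sea level) height.
\[
  H \;=\; h \;-\; N
\]
```

Determining the local geoid to centimetre level is what allows the ±2 cm target for mean sea level elevations quoted above.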

Relevance: 30.00%

Abstract:

For a wide range of environmental, hydrological, and engineering applications there is a fast growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is in the range of one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it remains comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. To this end, the general motivation of my thesis is the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem; accurate knowledge of the source wavelet is therefore critically important for successful application of such schemes. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure, rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when directly incorporating the wavelet estimation into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial since, in reality, these parameters are known to be frequency-dependent and complex, and thus recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water, there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to the evaluation of the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which is partially able to account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and has the ability to provide adequate tomographic reconstructions.
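For reference, the least-squares formulation underlying the Tarantola-type schemes mentioned above minimises the misfit between observed and synthetic waveforms; a schematic statement, omitting the data and model covariance operators and any regularisation of the full formulation, is:

```latex
% Schematic waveform-inversion objective for the model parameters m
% (here, e.g., the dielectric permittivity and electrical conductivity fields),
% with a simple gradient update; the gradient is typically obtained by
% crosscorrelating forward and adjoint (back-propagated residual) wavefields.
\[
  E(\mathbf{m}) \;=\; \tfrac{1}{2}\,
  \bigl\lVert \mathbf{d}_{\mathrm{obs}} - \mathbf{d}_{\mathrm{syn}}(\mathbf{m}) \bigr\rVert^{2},
  \qquad
  \mathbf{m}_{k+1} \;=\; \mathbf{m}_{k} \;-\; \alpha_{k}\,\nabla_{\mathbf{m}} E(\mathbf{m}_{k}).
\]
```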

Relevance: 30.00%

Abstract:

In this paper we perform a mathematical analysis of profits and losses under direct and full costing. The two methods are compared in different situations, mainly with respect to the utilisation of productive capacity and the existence of beginning inventories. Direct costing was conceived as a system of cost accounting which would show profits as a function of sales. Under full costing, profits depend on the particular combination of sales, production, costs of beginning inventories, etc., and the information displayed in financial statements can appear incongruent. Differences in profits between full and direct costing increase when full costing allocates fixed costs according to normal production, and in some cases the financial statements show even more incongruent performance. We conclude by stressing the importance of the profit and loss statement expressing profits under both costing systems.
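A worked toy example of the profit difference discussed above, with made-up figures rather than the paper's own numbers: when production exceeds sales, full (absorption) costing defers part of the fixed cost into inventory and therefore reports a higher profit than direct (variable) costing.

```python
# Hypothetical single-period figures (illustration only); fixed costs are
# allocated over actual production here, not over normal production.
units_produced = 1_000
units_sold = 800
price = 50.0
variable_cost_per_unit = 20.0
fixed_costs = 12_000.0

# Direct (variable) costing: all fixed costs are expensed in the period.
profit_direct = units_sold * (price - variable_cost_per_unit) - fixed_costs

# Full (absorption) costing: a share of fixed cost follows each unit into inventory.
fixed_per_unit = fixed_costs / units_produced
profit_full = units_sold * (price - variable_cost_per_unit - fixed_per_unit)

print(profit_direct)                 # 12000.0
print(profit_full)                   # 14400.0
print(profit_full - profit_direct)   # 2400.0 = fixed_per_unit * (produced - sold)
```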

Relevance: 30.00%

Abstract:

Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data which they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent reliance on fixing some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least there is no consensus on their validity.

How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model as the cause of performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights were managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, and the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.

Relevance: 30.00%

Abstract:

Over the last century, numerous techniques have been developed to analyze the movement of humans while walking and running. The combined use of kinematic and kinetic methods, mainly based on high-speed video analysis and force plates, has permitted a comprehensive description of the locomotion process in terms of energetics and biomechanics. While the different phases of a single gait cycle are well understood, there is increasing interest in how the neuro-motor system controls gait from stride to stride. Indeed, it has been observed that neurodegenerative diseases and aging can affect gait stability and the steadiness of gait parameters. From both clinical and fundamental research perspectives, there is therefore a need to develop techniques to accurately track gait parameters stride by stride over a long period with minimal constraints on patients. In this context, high-accuracy satellite positioning can provide an alternative tool to monitor outdoor walking. Indeed, high-end GPS receivers provide centimetre-accuracy positioning at 5-20 Hz sampling rates: this allows the stride-by-stride assessment of a number of basic gait parameters, such as walking speed, step length and step frequency, that can be tracked over several thousand consecutive strides in free-living conditions. Furthermore, long-range correlations and fractal-like patterns have been observed in these time series. Compared to other classical methods, GPS seems a promising technology in the field of gait variability analysis. However, its relatively high complexity and cost, combined with usability that requires further improvement, remain obstacles to the full development of GPS technology in human applications.
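A minimal sketch of the kind of stride-by-stride parameter extraction described above, assuming the high-rate GPS positions have already been converted to a local metric frame and that stride events have been detected (both are assumptions; the detection step itself is not shown):

```python
import numpy as np

def stride_parameters(xy, t, stride_idx):
    """Per-stride walking speed, stride length and stride frequency.

    xy         : (n, 2) array of planar positions in metres (local frame)
    t          : (n,) array of GPS timestamps in seconds (e.g. 10 Hz data)
    stride_idx : indices of the samples marking successive stride events
    """
    lengths, durations = [], []
    for i0, i1 in zip(stride_idx[:-1], stride_idx[1:]):
        steps = np.diff(xy[i0:i1 + 1], axis=0)                    # sample-to-sample displacements
        lengths.append(np.hypot(steps[:, 0], steps[:, 1]).sum())  # path length of one stride
        durations.append(t[i1] - t[i0])
    lengths, durations = np.array(lengths), np.array(durations)
    return lengths / durations, lengths, 1.0 / durations          # speed, length, frequency

# Toy usage: 10 Hz positions along a straight line, one stride event every 1.2 s.
t = np.arange(0, 12, 0.1)
xy = np.column_stack([1.4 * t, np.zeros_like(t)])                 # ~1.4 m/s walk
speed, length, freq = stride_parameters(xy, t, np.arange(0, len(t), 12))
print(speed.mean(), length.mean(), freq.mean())
```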

Relevance: 30.00%

Abstract:

With increased activity and reduced financial and human resources, there is a need for automation in clinical bacteriology. Initial processing of clinical samples includes repetitive and tedious steps. These tasks are suitable for automation, and several instruments are now available on the market, including the WASP (Copan), Previ-Isola (BioMerieux), Innova (Becton-Dickinson) and Inoqula (KIESTRA) systems. These new instruments allow efficient and accurate inoculation of samples, comprising four main steps: (i) selecting the appropriate Petri dish; (ii) inoculating the sample; (iii) spreading the inoculum on agar plates to obtain, upon incubation, well-separated bacterial colonies; and (iv) accurately labelling and sorting each inoculated medium. The challenge for clinical bacteriologists is to determine the ideal automated system for their own laboratory. Indeed, different solutions will be preferred according to the number and variety of samples, and to the types of sample that will be processed with the automated system. The final choice is difficult, because audits proposed by manufacturers risk being biased towards the solutions proposed by their own company, and because these automated systems may not be easily tested on site prior to the final decision, owing to the complexity of computer connections between the laboratory information system and the instrument. This article therefore summarizes the main parameters that need to be taken into account when choosing the optimal system, and provides some clues to help clinical bacteriologists make their choice.