201 results for SVM method


Relevance:

20.00%

Publisher:

Abstract:

Estimating the time since the last discharge of firearms and/or spent cartridges may be a useful piece of information in forensic firearm-related cases. The current approach consists of studying the diffusion of selected volatile organic compounds (such as naphthalene) released during the shooting using solid phase micro-extraction (SPME). However, this technique works poorly on handgun cartridges because the extracted quantities quickly fall below the limit of detection. In order to find more effective solutions and further investigate the aging of organic gunshot residue after the discharge of handgun cartridges, an extensive study was carried out in this work using a novel approach based on high capacity headspace sorptive extraction (HSSE). By adopting this technique, for the first time 51 gunshot residue (GSR) volatile organic compounds could be simultaneously detected from fired handgun cartridge cases. Application to aged specimens showed that many of those compounds presented significant and complementary aging profiles. Compound-to-compound ratios were also tested and proved to be beneficial both in reducing the variability of the aging curves and in enlarging the time window useful from a forensic casework perspective. The obtained results were thus particularly promising for the development of a new complete forensic dating methodology.
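Why compound-to-compound ratios reduce the variability of the aging curves can be sketched with a toy model: if specimen-to-specimen effects (e.g. extraction yield) scale both compounds equally, the ratio cancels that common factor while still carrying the age information. The compound decay constants and initial amounts below are illustrative assumptions, not values from the study.

```python
import math

def intensity(c0, k, t, scale=1.0):
    """Signal of a compound at age t (h); `scale` models specimen-dependent
    variability that affects both compounds equally (e.g. extraction yield)."""
    return scale * c0 * math.exp(-k * t)

def ratio(t, scale=1.0):
    """Compound-to-compound ratio: the common `scale` factor cancels,
    leaving a curve that depends only on the age t."""
    a = intensity(100.0, 0.30, t, scale)  # hypothetical fast-aging compound
    b = intensity(100.0, 0.05, t, scale)  # hypothetical slow-aging compound
    return a / b

# The ratio depends on age but not on the specimen-dependent scale:
r1 = ratio(t=10.0, scale=1.0)
r2 = ratio(t=10.0, scale=0.5)   # poorer extraction yield, same age
```

Under this model the two ratios are identical despite the different yields, while the ratio itself still decreases monotonically with age, which is what makes it usable for dating.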

Relevance:

20.00%

Publisher:

Abstract:

We present a novel hybrid (or multiphysics) algorithm, which couples pore-scale and Darcy descriptions of two-phase flow in porous media. The flow at the pore-scale is described by the Navier-Stokes equations, and the Volume of Fluid (VOF) method is used to model the evolution of the fluid-fluid interface. An extension of the Multiscale Finite Volume (MsFV) method is employed to construct the Darcy-scale problem. First, a set of local interpolators for pressure and velocity is constructed by solving the Navier-Stokes equations; then, a coarse mass-conservation problem is constructed by averaging the pore-scale velocity over the cells of a coarse grid, which act as control volumes; finally, a conservative pore-scale velocity field is reconstructed and used to advect the fluid-fluid interface. The method relies on the localization assumptions used to compute the interpolators (which are quite straightforward extensions of the standard MsFV) and on the postulate that the coarse-scale fluxes are proportional to the coarse-pressure differences. By numerical simulations of two-phase problems, we demonstrate that these assumptions provide hybrid solutions that are in good agreement with reference pore-scale solutions and are able to model the transition from stable to unstable flow regimes. Our hybrid method can naturally take advantage of several adaptive strategies and allows considering pore-scale fluxes only in some regions, while Darcy fluxes are used in the rest of the domain. Moreover, since the method relies on the assumption that the relationship between coarse-scale fluxes and pressure differences is local, it can be used as a numerical tool to investigate the limits of validity of Darcy's law and to understand the link between pore-scale quantities and their corresponding Darcy-scale variables.
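The coarse-scale construction described above (averaging fine-scale fluxes over coarse faces, then postulating that coarse fluxes are proportional to coarse-pressure differences) can be sketched in one dimension. The sampling rule, the transmissibility T, and all numbers below are illustrative assumptions, not quantities from the paper.

```python
def coarse_fluxes(fine_flux, ratio):
    """Sample the fine-scale flux at the coarse-cell faces (every
    `ratio`-th fine face); in 1-D incompressible flow this is exact."""
    return fine_flux[::ratio]

def coarse_pressures(p_left, fluxes, T):
    """Invert the postulate flux = T * (p_i - p_{i+1}) to reconstruct
    coarse pressures from a known left-boundary pressure."""
    p = [p_left]
    for q in fluxes:
        p.append(p[-1] - q / T)
    return p

# Incompressible 1-D flow: the fine-scale flux is uniform across all faces.
fine_flux = [2.0] * 12                          # 12 fine faces
q_coarse = coarse_fluxes(fine_flux, ratio=4)    # 3 coarse faces
p = coarse_pressures(10.0, q_coarse, T=0.5)     # hypothetical transmissibility
```

The reconstructed coarse pressures then reproduce the coarse fluxes through the same proportionality, which is the consistency the method's localization postulate requires.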

Relevance:

20.00%

Publisher:

Abstract:

Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in Pamphlet No. 16. One of the issues not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed that takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h.
Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation in a way that takes into account the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
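A minimal sketch of the kind of Newton-type dead-time inversion such an algorithm applies at each instant, using the standard paralyzable-detector saturation model m = n·exp(-n·tau) (m: measured rate, n: true rate, tau: dead time). The dead time and count rates below are illustrative, not the paper's measured parameters; the paper's algorithm additionally makes the inversion time dependent over the sweep.

```python
import math

def true_rate(m, tau, tol=1e-10, max_iter=100):
    """Newton iteration solving n*exp(-n*tau) = m for the true rate n,
    starting from the measured rate m as the initial guess."""
    n = m
    for _ in range(max_iter):
        f = n * math.exp(-n * tau) - m              # residual of saturation curve
        fp = (1.0 - n * tau) * math.exp(-n * tau)   # derivative with respect to n
        step = f / fp
        n -= step
        if abs(step) < tol:
            break
    return n

tau = 5e-6                                   # 5 microsecond dead time (illustrative)
n_true = 20000.0                             # true count rate (counts/s)
m_meas = n_true * math.exp(-n_true * tau)    # what a saturating camera reports
n_rec = true_rate(m_meas, tau)               # recovered true rate
```

Because the saturation curve is monotone below 1/tau, Newton's method converges rapidly from the measured rate, matching the paper's observation that the iteration converges to any desired precision.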

Relevance:

20.00%

Publisher:

Abstract:

In this paper the iterative MSFV method is extended to include the sequential implicit simulation of time-dependent problems involving the solution of a system of pressure-saturation equations. To control numerical errors in simulation results, an error estimate, based on the residual of the MSFV approximate pressure field, is introduced. In the initial time steps of the simulation, iterations are employed until a specified accuracy in pressure is achieved. This initial solution is then used to improve the localization assumption at later time steps. Additional iterations in the pressure solution are employed only when the pressure residual becomes larger than a specified threshold value. The efficiency of the strategy and the error control criteria are numerically investigated. This paper also shows that it is possible to derive an a priori estimate and control, based on the allowed pressure-equation residual, to guarantee the desired accuracy in the saturation calculation.
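The residual-based error control described above can be illustrated with a toy pressure system: additional iterations are invested only while the residual exceeds a threshold, so a looser threshold buys fewer iterations at the cost of a less accurate pressure. The matrix here is a simple 1-D Laplacian with a Jacobi smoother, not the MSFV operator; it is a sketch of the control strategy only.

```python
def residual(p, b):
    """Max-norm residual of the 1-D Laplacian system (2p_i - p_{i-1} - p_{i+1} = b_i,
    zero Dirichlet boundaries)."""
    n = len(p)
    r = []
    for i in range(n):
        left = p[i - 1] if i > 0 else 0.0
        right = p[i + 1] if i < n - 1 else 0.0
        r.append(b[i] - (2.0 * p[i] - left - right))
    return max(abs(x) for x in r)

def jacobi_until(p, b, threshold, max_iter=10000):
    """Iterate only while the residual exceeds `threshold` (error control)."""
    iters = 0
    while residual(p, b) > threshold and iters < max_iter:
        n = len(p)
        p = [((p[i - 1] if i > 0 else 0.0)
              + (p[i + 1] if i < n - 1 else 0.0)
              + b[i]) / 2.0
             for i in range(n)]
        iters += 1
    return p, iters

b = [1.0] * 5
p0 = [0.0] * 5
p_tight, it_tight = jacobi_until(p0, b, threshold=1e-8)   # accurate, expensive
p_loose, it_loose = jacobi_until(p0, b, threshold=1e-2)   # cheap, controlled error
```

The point of the a priori control in the paper is precisely to choose the allowed pressure residual (the `threshold` above) so that the downstream saturation error stays within a desired bound.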

Relevance:

20.00%

Publisher:

Abstract:

Quantification is a major problem when using histology to study the influence of ecological factors on tree structure. This paper presents a method to prepare and to analyse transverse sections of the cambial zone and of conductive phloem in bark samples. The following paper (II) presents the automated measurement procedure. Part I here describes and discusses the preparation method, and the influence of tree age on the observed structure. Highly contrasted images of samples extracted at breast height during dormancy were analysed with an automatic image analyser. Between three young (38 years) and three old (147 years) trees, age-related differences were identified by size and shape parameters, at both cell and tissue levels. In the cambial zone, older trees had larger and more rectangular fusiform initials. In the phloem, sieve tubes were also larger, but their shape did not change and the area for sap conduction was similar in both categories. Nevertheless, alterations were limited, and required statistical analysis to be identified and ascertained. The physiological implications of the structural changes are discussed.
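Findings such as "larger and more rectangular fusiform initials" rest on size and shape parameters measured per cell. The descriptors below (area via the shoelace formula, aspect ratio, rectangularity as area over bounding-box area) are standard image-analysis measures chosen for illustration; they are not necessarily the exact parameters of the automated procedure in Part II.

```python
def polygon_area(pts):
    """Shoelace formula for a simple polygon given as (x, y) vertices."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def shape_params(pts):
    """Size and shape descriptors of one cell outline."""
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    area = polygon_area(pts)
    return {
        "area": area,
        "aspect_ratio": max(w, h) / min(w, h),
        "rectangularity": area / (w * h),   # 1.0 for a perfect rectangle
    }

# Two toy cell outlines with the same bounding box but different shape:
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
diamond = [(2, 0), (4, 2), (2, 4), (0, 2)]
```

Comparing such descriptors between age groups, cell by cell, is what allows small but systematic differences to be detected statistically.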

Relevance:

20.00%

Publisher:

Abstract:

The trabecular bone score (TBS) is a gray-level textural metric that can be extracted from the two-dimensional lumbar spine dual-energy X-ray absorptiometry (DXA) image. TBS is related to bone microarchitecture and provides skeletal information that is not captured from the standard bone mineral density (BMD) measurement. Based on experimental variograms of the projected DXA image, TBS has the potential to discern differences between DXA scans that show similar BMD measurements. An elevated TBS value correlates with better skeletal microstructure; a low TBS value correlates with weaker skeletal microstructure. Lumbar spine TBS has been evaluated in cross-sectional and longitudinal studies. The following conclusions are based upon publications reviewed in this article: 1) TBS gives lower values in postmenopausal women and in men with previous fragility fractures than their nonfractured counterparts; 2) TBS is complementary to data available by lumbar spine DXA measurements; 3) TBS results are lower in women who have sustained a fragility fracture but in whom DXA does not indicate osteoporosis or even osteopenia; 4) TBS predicts fracture risk as well as lumbar spine BMD measurements in postmenopausal women; 5) efficacious therapies for osteoporosis differ in the extent to which they influence the TBS; 6) TBS is associated with fracture risk in individuals with conditions related to reduced bone mass or bone quality. Based on these data, lumbar spine TBS holds promise as an emerging technology that could well become a valuable clinical tool in the diagnosis of osteoporosis and in fracture risk assessment. © 2014 American Society for Bone and Mineral Research.
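The experimental-variogram idea underlying TBS can be sketched as follows: the variogram V(h) is the mean squared gray-level difference between pixels at offset h, and a texture score can be read off as the log-log slope of V(h) at short lags. This is an illustration of the principle only; the actual TBS computation is proprietary, and the toy images and the lag choice below are assumptions.

```python
import math

def variogram(img, h):
    """Mean squared gray-level difference between pixels h columns apart."""
    diffs = [(row[i + h] - row[i]) ** 2
             for row in img for i in range(len(row) - h)]
    return sum(diffs) / len(diffs)

def texture_slope(img, h1=1, h2=2):
    """Log-log slope of the experimental variogram between lags h1 and h2."""
    v1, v2 = variogram(img, h1), variogram(img, h2)
    return (math.log(v2) - math.log(v1)) / (math.log(h2) - math.log(h1))

# Two toy "images" with different texture scales:
fine = [[0, 1, 0, 0, 1, 1, 0, 1]] * 4      # rapid gray-level variation
coarse = [[0, 0, 0, 0, 1, 1, 1, 1]] * 4    # smooth, large structures
```

The rapidly varying image already has large differences at lag 1, so its log-log variogram is nearly flat, while the smooth image's variogram rises steeply with lag; it is this difference in variogram shape, invisible to a mean-density measure like BMD, that a variogram-based score exploits.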

Relevance:

20.00%

Publisher:

Abstract:

We present a method for segmenting white matter tracts from high angular resolution diffusion MR images by representing the data in a 5-dimensional space of position and orientation. Whereas crossing fiber tracts cannot be separated in 3D position space, they clearly disentangle in 5D position-orientation space. The segmentation is done using a 5D level set method applied to hyper-surfaces evolving in 5D position-orientation space. In this paper we present a methodology for constructing the position-orientation space. We then show how to implement the standard level set method in such a non-Euclidean high dimensional space. Level set theory is defined for N dimensions, but there are several practical implementation details to consider, such as the computation of mean curvature. Finally, we show results from a synthetic model and a few preliminary results on real data of a human brain acquired by high angular resolution diffusion MRI.
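The N-dimensional level set machinery the paper builds on can be sketched in 2-D for brevity: a front stored as the zero level of phi is evolved with speed F along its normal via phi_t + F|grad phi| = 0, discretized with the standard upwind scheme. The grid size, speed, and time step below are illustrative; the paper's contribution lies in carrying this machinery into the non-Euclidean 5-D position-orientation space, which this sketch does not attempt.

```python
import math

def make_phi(n, cx, cy, r):
    """Signed distance to a circle: negative inside, positive outside."""
    return [[math.hypot(x - cx, y - cy) - r for x in range(n)] for y in range(n)]

def step(phi, F=1.0, dt=0.5):
    """One upwind step of phi_t + F*|grad phi| = 0 (F > 0: front expands);
    unit grid spacing, boundary cells left untouched."""
    n = len(phi)
    new = [row[:] for row in phi]
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            dxm = phi[y][x] - phi[y][x - 1]; dxp = phi[y][x + 1] - phi[y][x]
            dym = phi[y][x] - phi[y - 1][x]; dyp = phi[y + 1][x] - phi[y][x]
            grad = math.sqrt(max(dxm, 0.0) ** 2 + min(dxp, 0.0) ** 2
                             + max(dym, 0.0) ** 2 + min(dyp, 0.0) ** 2)
            new[y][x] = phi[y][x] - dt * F * grad
    return new

def inside_count(phi):
    """Number of grid points inside the front (phi < 0)."""
    return sum(1 for row in phi for v in row if v < 0.0)

phi = make_phi(21, 10.0, 10.0, 5.0)   # circular front of radius 5
before = inside_count(phi)
for _ in range(6):
    phi = step(phi)                    # front expands at unit speed
after = inside_count(phi)
```

The same update generalizes dimension by dimension, which is why the theory extends to 5-D; the practical difficulties the paper discusses (e.g. mean curvature) arise in the metric of the curved position-orientation space, not in the update loop itself.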

Relevance:

20.00%

Publisher:

Abstract:

A solution of (18)F was standardised with a 4πβ-4πγ coincidence counting system in which the beta detector is a one-inch diameter cylindrical UPS89 plastic scintillator, positioned at the bottom of a well-type 5''x5'' NaI(Tl) gamma-ray detector. Almost full detection efficiency, which was varied downwards electronically, was achieved in the beta channel. Aliquots of this (18)F solution were also measured using 4πγ NaI(Tl) integral counting and Monte Carlo calculated efficiencies, as well as the CIEMAT-NIST method. Secondary measurements of the same solution were also performed with an IG11 ionisation chamber whose equivalent activity is traceable to the Système International de Référence through the contribution IRA-METAS made to it in 2001; IRA's degree of equivalence was found to be close to the key comparison reference value (KCRV). The (18)F activity predicted by this coincidence system agrees closely with the ionisation chamber measurement and is compatible within one standard deviation with the other primary measurements. This work demonstrates that our new coincidence system can standardise short-lived radionuclides used in nuclear medicine.
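The textbook relation that coincidence counting systems exploit can be sketched directly: with beta efficiency eb and gamma efficiency eg, the channel rates are Nb = A·eb, Ng = A·eg, and the coincidence rate is Nc = A·eb·eg, so the activity A = Nb·Ng/Nc is recovered without knowing either efficiency. The numbers below are illustrative; a real standardisation also handles background, dead time, decay correction, and efficiency extrapolation.

```python
def coincidence_activity(n_beta, n_gamma, n_coinc):
    """Ideal coincidence-counting estimate of the source activity:
    the unknown efficiencies cancel in n_beta * n_gamma / n_coinc."""
    return n_beta * n_gamma / n_coinc

A = 50000.0           # true activity in decays/s (illustrative)
eb, eg = 0.97, 0.25   # detection efficiencies (unknown in practice)
n_b = A * eb          # beta-channel rate
n_g = A * eg          # gamma-channel rate
n_c = A * eb * eg     # coincidence rate (independent detections assumed)
A_est = coincidence_activity(n_b, n_g, n_c)
```

Varying the beta efficiency downwards electronically, as described above, is what lets such a system check this relation experimentally by extrapolating to full efficiency.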

Relevance:

20.00%

Publisher:

Abstract:

We studied the response to F+0 renography and the relative and absolute individual kidney function in neonates and < 6-mo-old infants before and after surgery for unilateral ureteropelvic junction obstruction (UJO). METHODS: The results obtained at diagnosis and after pyeloplasty for 9 children (8 boys, 1 girl; age range, 0.8-5.9 mo; mean age ± SD, 2.4 ± 1.5 mo) with proven unilateral UJO (i.e., affected kidney [AK]) and an unremarkable contralateral kidney (i.e., normal kidney [NK]) were evaluated and compared with a control group of 10 children (6 boys, 4 girls; age range, 0.8-2.8 mo; mean age, 1.5 ± 0.7 mo) selected because of symmetric renal function, absence of vesicoureteral reflux or infection, and an initially dilated but not obstructed renal pelvis as proven by follow-up. Renography was performed for 20 min after injection of (123)I-hippuran (OIH) (0.5-1.0 MBq/kg) immediately followed by furosemide (1 mg/kg). The relative and absolute renal functions and the response to furosemide were measured on background-subtracted and depth-corrected renograms. The response to furosemide was quantified by an elimination index (EI), defined as the ratio of the 3- to 20-min activities: an EI ≥ 3 was considered definitively normal and an EI ≤ 1 definitively abnormal. If EI was equivocal (1 < EI < 3), the response to gravity-assisted drainage was used to differentiate AKs from NKs. Absolute separate renal function was measured by an accumulation index (AI), defined as the percentage of (123)I-OIH (%ID) extracted by the kidney 30-90 s after maximal cardiac activity. RESULTS: All AKs had definitively abnormal EIs at diagnosis (mean, 0.56 ± 0.12) and were significantly lower than the EIs of the NKs (mean, 3.24 ± 1.88) and of the 20 control kidneys (mean, 3.81 ± 1.97; P < 0.001). The EIs of the AKs significantly improved (mean, 2.81 ± 0.64; P < 0.05) after pyeloplasty.
At diagnosis, the AIs of the AKs were significantly lower (mean, 6.31 ± 2.33 %ID) than the AIs of the NKs (mean, 9.43 ± 1.12 %ID) and of the control kidneys (mean, 9.05 ± 1.17 %ID; P < 0.05). The AIs of the AKs increased at follow-up (mean, 7.81 ± 2.23 %ID) but remained lower than those of the NKs (mean, 10.75 ± 1.35 %ID; P < 0.05). CONCLUSION: In neonates and infants younger than 6 mo, (123)I-OIH renography with early furosemide injection (F+0) allowed us to reliably diagnose AKs and to determine if parenchymal function was normal or impaired and if it improved after surgery.
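The elimination-index decision rule defined above is simple enough to state as code: EI is the ratio of the 3-min to the 20-min renogram activity, EI of 3 or more is definitively normal, 1 or less is definitively abnormal, and intermediate values are equivocal (resolved in the study by gravity-assisted drainage). The activity counts below are illustrative, not data from the study.

```python
def elimination_index(act_3min, act_20min):
    """EI: ratio of the 3-min to the 20-min renogram activity."""
    return act_3min / act_20min

def classify(ei):
    """Decision rule from the study's definitions."""
    if ei >= 3.0:
        return "normal"
    if ei <= 1.0:
        return "abnormal"
    return "equivocal"   # resolved via gravity-assisted drainage in the study

# Illustrative counts: an obstructed kidney accumulates activity,
# a normal kidney washes it out between 3 and 20 min.
ei_affected = elimination_index(1200.0, 2400.0)
ei_normal = elimination_index(3000.0, 750.0)
```

A good responder thus has a high EI because furosemide-driven drainage empties the renal pelvis well before the 20-min mark.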

Relevance:

20.00%

Publisher:

Abstract:

The integration of geophysical data into the subsurface characterization problem has been shown in many cases to significantly improve hydrological knowledge by providing information at spatial scales and locations that is unattainable using conventional hydrological measurement techniques. The investigation of exactly how much benefit can be brought by geophysical data in terms of its effect on hydrological predictions, however, has received considerably less attention in the literature. Here, we examine the potential hydrological benefits brought by a recently introduced simulated annealing (SA) conditional stochastic simulation method designed for the assimilation of diverse hydrogeophysical data sets. We consider the specific case of integrating crosshole ground-penetrating radar (GPR) and borehole porosity log data to characterize the porosity distribution in saturated heterogeneous aquifers. In many cases, porosity is linked to hydraulic conductivity and thus to flow and transport behavior. To perform our evaluation, we first generate a number of synthetic porosity fields exhibiting varying degrees of spatial continuity and structural complexity. Next, we simulate the collection of crosshole GPR data between several boreholes in these fields, and the collection of porosity log data at the borehole locations. The inverted GPR data, together with the porosity logs, are then used to reconstruct the porosity field using the SA-based method, along with a number of other more elementary approaches. Assuming that the grid-cell-scale relationship between porosity and hydraulic conductivity is unique and known, the porosity realizations are then used in groundwater flow and contaminant transport simulations to assess the benefits and limitations of the different approaches.
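The conditional stochastic simulation idea above can be sketched with a toy simulated-annealing loop: cells at "borehole" locations are fixed to their measured porosity, the remaining cells are perturbed by swaps, and perturbations are accepted with the Metropolis rule until a target spatial statistic is matched. Here the statistic is simply the lag-1 variogram of a 1-D field; the real method also conditions on the crosshole GPR data, and every number below is illustrative.

```python
import math
import random

def lag1_variogram(field):
    """Mean squared difference between neighboring cells."""
    d = [(field[i + 1] - field[i]) ** 2 for i in range(len(field) - 1)]
    return sum(d) / len(d)

def anneal(field, fixed, target, T=1.0, cooling=0.995, steps=4000, seed=1):
    """SA conditional simulation sketch: swap unconditioned cells, accept
    via the Metropolis rule, cool the temperature each step."""
    rng = random.Random(seed)
    free = [i for i in range(len(field)) if i not in fixed]
    obj = abs(lag1_variogram(field) - target)
    for _ in range(steps):
        i, j = rng.sample(free, 2)                # perturb: swap two free cells
        field[i], field[j] = field[j], field[i]
        new_obj = abs(lag1_variogram(field) - target)
        if new_obj <= obj or rng.random() < math.exp((obj - new_obj) / T):
            obj = new_obj                         # accept the perturbation
        else:
            field[i], field[j] = field[j], field[i]  # reject: undo the swap
        T *= cooling
    return field, obj

rng = random.Random(0)
field = [rng.random() for _ in range(40)]         # initial porosity guess
field[0], field[20] = 0.30, 0.35                  # conditioning "log" values
start = abs(lag1_variogram(field) - 0.01)
field, final = anneal(field, fixed={0, 20}, target=0.01)
```

Because conditioning cells are never swapped, every realization honors the borehole data exactly while the objective drives the rest of the field toward the target spatial structure.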

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present and apply a semisupervised support vector machine based on cluster kernels for the problem of very high resolution image classification. In the proposed setting, a base kernel working with labeled samples only is deformed by a likelihood kernel encoding similarities between unlabeled examples. The resulting kernel is used to train a standard support vector machine (SVM) classifier. Experiments carried out on very high resolution (VHR) multispectral and hyperspectral images using very few labeled examples show the relevance of the method in the context of urban image classification. Its simplicity and the small number of parameters involved make it versatile and workable by inexperienced users.
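The kernel deformation described above can be sketched minimally: a base RBF kernel is multiplied elementwise by a cluster kernel built from the unlabeled data, and the deformed kernel feeds a kernel classifier. In this sketch a kernel nearest-class-mean rule stands in for the SVM, the cluster assignments are given directly rather than estimated from unlabeled samples, and all features, labels, and kernel parameters are illustrative.

```python
import math

def rbf(a, b, gamma=0.5):
    """Base kernel on the spectral feature vectors (labeled information)."""
    return math.exp(-gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_kernel(ca, cb, same=1.0, diff=0.1):
    """Likelihood-style similarity: kept when two samples fall in the same
    cluster of the unlabeled data, damped otherwise."""
    return same if ca == cb else diff

def deformed_k(a, b, ca, cb):
    """Base kernel deformed (multiplied elementwise) by the cluster kernel."""
    return rbf(a, b) * cluster_kernel(ca, cb)

def classify(x, cx, labeled):
    """Kernel nearest-class-mean with the deformed kernel (a simple
    stand-in for the SVM trained on the deformed kernel)."""
    scores = {}
    for xi, ci, yi in labeled:
        scores.setdefault(yi, []).append(deformed_k(x, xi, cx, ci))
    return max(scores, key=lambda y: sum(scores[y]) / len(scores[y]))

# Labeled pixels: (features, cluster id from the unlabeled data, class label).
labeled = [((0.0, 0.0), 0, "urban"), ((0.2, 0.1), 0, "urban"),
           ((3.0, 3.0), 1, "vegetation"), ((3.2, 2.9), 1, "vegetation")]

pred = classify((0.4, 0.2), cx=0, labeled=labeled)
```

Because the deformed kernel suppresses similarity across cluster boundaries, decision boundaries are pushed into low-density regions of the unlabeled data, which is the intended semisupervised effect.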