29 results for k-Means algorithm
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
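The integrated-curve idea above can be sketched in a few lines: interpolate the cumulative sum of the binned data with a parametrized Hermite curve and difference it at the target edges, so the overall integral is conserved by construction. This is a minimal illustration, not the paper's implementation; the `tension` parameter and the slope-scaling form are assumptions standing in for the paper's single overshoot-control parameter.

```python
import numpy as np

def rebin_conservative(edges_src, values, edges_dst, tension=0.0):
    """Re-bin histogrammed data onto new edges while conserving the
    overall integral: interpolate the *cumulative* data with a cubic
    Hermite curve, then difference it at the target edges.

    `tension` (assumed parameter): 0 keeps finite-difference slopes,
    1 forces zero slopes, which suppresses overshoot/undershoot.
    """
    x = np.asarray(edges_src, float)
    F = np.concatenate(([0.0], np.cumsum(values)))  # integral at source edges
    m = np.gradient(F, x) * (1.0 - tension)         # damped knot slopes
    xq = np.clip(edges_dst, x[0], x[-1])
    i = np.clip(np.searchsorted(x, xq, side="right") - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    # cubic Hermite basis functions
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    Fq = h00 * F[i] + h10 * h * m[i] + h01 * F[i + 1] + h11 * h * m[i + 1]
    return np.diff(Fq)  # re-binned values on the target grid
```

Because the curve passes through the cumulative values at the source edges, re-binning onto the original grid reproduces the input exactly, and the total is conserved for any target grid spanning the same range.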
Abstract:
Radiation metabolomics employing mass spectral technologies represents a plausible means of high-throughput, minimally invasive radiation biodosimetry. A simplified metabolomics protocol is described that employs ubiquitous gas chromatography-mass spectrometry and open-source software, including the random forests machine learning algorithm, to uncover latent biomarkers of 3 Gy gamma radiation in rats. Urine was collected from six male Wistar rats and six sham-irradiated controls for 7 days, 4 prior to irradiation and 3 after irradiation. Water and food consumption, urine volume, body weight, and sodium, potassium, calcium, chloride, phosphate and urea excretion showed major effects from exposure to gamma radiation. The metabolomics protocol uncovered several urinary metabolites that were significantly up-regulated (glyoxylate, threonate, thymine, uracil, p-cresol) and down-regulated (citrate, 2-oxoglutarate, adipate, pimelate, suberate, azelaate) as a result of radiation exposure. Thymine and uracil were shown to derive largely from thymidine and 2'-deoxyuridine, which are known radiation biomarkers in the mouse. The radiation metabolomic phenotype in rats appeared to derive from oxidative stress and effects on kidney function. Gas chromatography-mass spectrometry is a promising platform on which to develop the field of radiation metabolomics further and to assist in the design of instrumentation for use in detecting biological consequences of environmental radiation release.
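As a rough sketch of the classification step described above, a random forest can separate irradiated from sham urine profiles and rank candidate metabolite markers by feature importance. The data, feature list, and parameters below are illustrative only (scikit-learn assumed), not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["citrate", "2-oxoglutarate", "thymine", "uracil", "glyoxylate"]
# toy intensity matrix: 6 irradiated + 6 sham-irradiated urine samples
X = rng.normal(size=(12, len(features)))
X[:6, 2:4] += 2.0                  # simulate up-regulated thymine/uracil
y = np.array([1] * 6 + [0] * 6)    # 1 = irradiated, 0 = sham

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
# rank features by Gini importance, highest first
ranking = sorted(zip(rf.feature_importances_, features), reverse=True)
```

The out-of-bag score gives a rough internal error estimate, which matters for a cohort this small, where a held-out test set would be impractical.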
Abstract:
The hippocampal formation (HF) of healthy control subjects and schizophrenic patients was examined using an MRI experiment that implements sequences for relaxometry and magnetization transfer (MT) quantification. In addition to the semi-quantitative magnetization transfer ratio (MTR), all of the observable properties of the binary spin bath model were included. The study demonstrates that, in contrast to the MTR, quantitative MT parameters (especially the T2 relaxation time of restricted protons, T2b) are capable of differentiating functionally significant subregions within the HF. The MT methodology appears to be a promising new tool for the differential microstructural evaluation of the HF in neuropsychiatric disorders accompanied by memory disturbances.
Abstract:
Pedicle hooks which are used as an anchorage for posterior spinal instrumentation may be subjected to considerable three-dimensional forces. In order to achieve stronger attachment to the implantation site, hooks using screws for additional fixation have been developed. The failure loads and mechanisms of three such devices were experimentally determined on human thoracic vertebrae: the Universal Spine System (USS) pedicle hook with one screw, a prototype pedicle hook with two screws, and the Cotrel-Dubousset (CD) pedicle hook with screw. The USS hooks use 3.2-mm self-tapping fixation screws which pass into the pedicle, whereas the CD hook is stabilised with a 3-mm set screw pressing against the superior part of the facet joint. A clinically established 5-mm pedicle screw was tested for comparison. A matched-pair experimental design was implemented to evaluate these implants in constrained (series I) and rotationally unconstrained (series II) posterior pull-out tests. In the constrained tests the pedicle screw was the strongest implant, with an average pull-out force of 1650 N (SD 623 N). The prototype hook was comparable, with an average failure load of 1530 N (SD 414 N). The average pull-out force of the USS hook with one screw was 910 N (SD 243 N), not significantly different from the CD hook's average failure load of 740 N (SD 189 N). The results of the unconstrained tests were similar, with the prototype hook being the strongest device (average 1617 N, SD 652 N). However, in this series the difference in failure load between the USS hook with one screw and the CD hook was significant. Average failure loads of 792 N (SD 184 N) for the USS hook and 464 N (SD 279 N) for the CD hook were measured. A pedicular fracture in the plane of the fixation screw was the most common failure mode for USS hooks. (ABSTRACT TRUNCATED AT 250 WORDS)
Abstract:
A new anisotropic elastic-viscoplastic damage constitutive model for bone is proposed using an eccentric elliptical yield criterion and nonlinear isotropic hardening. A micromechanics-based multiscale homogenization scheme proposed by Reisinger et al. is used to obtain the effective elastic properties of lamellar bone. The dissipative process in bone is modeled as viscoplastic deformation coupled to damage. The model is based on an orthotropic eccentric elliptical criterion in stress space. In order to simplify material identification, an eccentric elliptical isotropic yield surface was defined in strain space, which is transformed to a stress-based criterion by means of the damaged compliance tensor. Viscoplasticity is implemented by means of the continuous Perzyna formulation. Damage is modeled by a scalar function of the accumulated plastic strain, D(κ), reducing all elements of the stiffness matrix. A polynomial flow rule is proposed in order to capture the rate-dependent post-yield behavior of lamellar bone. A numerical algorithm to perform the back projection on the rate-dependent yield surface has been developed and implemented in the commercial finite element solver Abaqus/Standard as a user subroutine UMAT. A consistent tangent operator has been derived and implemented in order to ensure quadratic convergence. Correct implementation of the algorithm, convergence, and accuracy of the tangent operator were tested by means of strain- and stress-based single element tests. A finite element simulation of nanoindentation in lamellar bone was finally performed in order to show the abilities of the newly developed constitutive model.
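The ingredients named above can be summarized schematically. The symbols and the particular damage law below are assumptions chosen for illustration, not the paper's exact formulation (which uses a polynomial flow rule and an eccentric elliptical criterion):

```latex
% Schematic structure of a damage-coupled Perzyna viscoplastic model
% (notation and the exponential damage law are illustrative assumptions).
\begin{align}
  \boldsymbol{\sigma} &= \bigl(1 - D(\kappa)\bigr)\,\mathbb{C} :
      \bigl(\boldsymbol{\varepsilon} - \boldsymbol{\varepsilon}^{vp}\bigr)
      && \text{damaged elasticity} \\
  \dot{\boldsymbol{\varepsilon}}^{vp} &= \frac{1}{\eta}\,
      \bigl\langle \phi(Y) \bigr\rangle\,
      \frac{\partial Y}{\partial \boldsymbol{\sigma}}
      && \text{continuous Perzyna flow rule} \\
  \dot{\kappa} &= \bigl\lVert \dot{\boldsymbol{\varepsilon}}^{vp} \bigr\rVert,
  \qquad
  D(\kappa) = D_{\max}\,\bigl(1 - e^{-\kappa/\kappa_0}\bigr)
      && \text{scalar damage evolution}
\end{align}
```

Here Y is the yield function, ⟨·⟩ the Macaulay bracket, and η the viscosity; scaling the whole stiffness by (1 − D) is what "reducing all elements of the stiffness matrix" amounts to for a scalar damage variable.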
Abstract:
Cognitive event-related potentials (ERPs) are widely employed in the study of dementive disorders. The morphology of the averaged response is known to be under the influence of neurodegenerative processes and is exploited for diagnostic purposes. This work builds on the idea that there is additional information in the dynamics of single-trial responses. We introduce a novel way to detect mild cognitive impairment (MCI) from recordings of auditory ERP responses. Using single-trial responses from a cohort of 25 amnestic MCI patients and a group of age-matched controls, we suggest a descriptor capable of encapsulating single-trial (ST) response dynamics for the benefit of early diagnosis. A customized vector quantization (VQ) scheme is first employed to summarize the overall set of ST-responses by means of a small-sized, semantically organized codebook of brain waves. Each ST-response is then treated as a trajectory that can be encoded as a sequence of code vectors. A subject's set of responses is consequently represented as a histogram of activated code vectors. Discriminating MCI patients from healthy controls is based on the deduced response profiles and carried out by means of a standard machine learning procedure. The novel response representation was found to significantly improve MCI detection with respect to the standard alternative representation obtained via ensemble averaging (13% in terms of sensitivity and 6% in terms of specificity). Hence, the role of cognitive ERPs as a biomarker for MCI can be enhanced by adopting the fine-grained description of our VQ scheme.
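The encoding pipeline described above, codebook learning followed by histogram profiling, can be sketched with a plain k-means in place of the paper's customized VQ scheme (a simplifying assumption; the real codebook is semantically organized):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means codebook learning, standing in for the paper's
    customized VQ scheme. X has one row per single-trial response."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]    # initial code vectors
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(1)                          # nearest code vector
        for j in range(k):
            if np.any(lab == j):
                C[j] = X[lab == j].mean(0)         # update centroid
    return C, lab

def response_profile(trials, codebook):
    """Represent a subject's set of single-trial responses as a
    normalized histogram of activated code vectors."""
    d = ((trials[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(codebook))
    return hist / hist.sum()
```

The resulting fixed-length profile vectors can then be fed to any standard classifier, which is the "standard machine learning procedure" role in the abstract.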
Abstract:
A 45-year-old man was admitted to the emergency department because of twitching of the head. The patient had been taking a tablet of sumatriptan every 3-4 h because of increasing head pain after a car accident. Owing to depression, the patient was on long-term treatment with venlafaxine. The patient presented hypertensive and tachycardic, with dyskinesia and spontaneous myoclonic movements of the right sternocleidomastoid muscle. A CT scan of the head and cervical spine ruled out any fractures, bleeding or vascular damage from the accident. After discontinuation of all serotonergic agents and administration of lorazepam, symptoms resolved 24 h after the last intake of sumatriptan. Serotonin syndrome is a clinical diagnosis that requires a high index of diagnostic suspicion. Clinical features span a broad spectrum, ranging from mild to life-threatening manifestations. Management is based on removal of the precipitating drugs and symptomatic care, including benzodiazepines.
Abstract:
Postpartum hemorrhage (PPH) is one of the main causes of maternal death even in industrialized countries. It represents an emergency situation that necessitates a rapid decision and, in particular, an exact diagnosis and root cause analysis in order to initiate the correct therapeutic measures in interdisciplinary cooperation. In addition to established guidelines, the benefit of standardized therapy algorithms has been demonstrated. A therapy algorithm for the obstetric emergency of postpartum hemorrhage was not yet available in the German language. The establishment of an international (Germany, Austria and Switzerland, D-A-CH) "treatment algorithm for postpartum hemorrhage" was therefore undertaken as an interdisciplinary project based on the guidelines of the corresponding specialist societies (anesthesia and intensive care medicine and obstetrics) in the three countries, as well as on comparable international algorithms for the therapy of PPH. Obstetrics and anesthesiology personnel must possess sufficient expertise for emergency situations despite low case numbers. The rarity of occurrence for individual patients and the life-threatening nature of the situation necessitate a structured approach according to predetermined treatment algorithms, which can then be carried out as established. Furthermore, this algorithm presents the opportunity to train for emergency situations in an interdisciplinary team.
Abstract:
The International Surface Temperature Initiative (ISTI) is striving towards substantively improving our ability to robustly understand historical land surface air temperature change at all scales. A key recently completed first step has been collating all available records into a comprehensive open access, traceable and version-controlled databank. The crucial next step is to maximise the value of the collated data through a robust international framework of benchmarking and assessment for product intercomparison and uncertainty estimation. We focus on uncertainties arising from the presence of inhomogeneities in monthly mean land surface temperature data and the varied methodological choices made by various groups in building homogeneous temperature products. The central facet of the benchmarking process is the creation of global-scale synthetic analogues to the real-world database where both the "true" series and inhomogeneities are known (a luxury the real-world data do not afford us). Hence, algorithmic strengths and weaknesses can be meaningfully quantified and conditional inferences made about the real-world climate system. Here we discuss the necessary framework for developing an international homogenisation benchmarking system on the global scale for monthly mean temperatures. The value of this framework is critically dependent upon the number of groups taking part and so we strongly advocate involvement in the benchmarking exercise from as many data analyst groups as possible to make the best use of this substantial effort.
Abstract:
Given a short-arc optical observation with estimated angle-rates, the admissible region is a compact region in the range / range-rate space defined such that all likely and relevant orbits are contained within it. An alternative boundary value problem formulation has recently been proposed where range / range hypotheses are generated with two angle measurements from two tracks as input. In this paper, angle-rate information is reintroduced as a means to eliminate hypotheses by bounding their constants of motion before a more computationally costly Lambert solver or differential correction algorithm is run.
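The pruning step described above can be illustrated by bounding one constant of motion, the specific orbital energy (equivalently the semi-major axis). The bounds and the state representation below are assumptions for the sketch, not the paper's exact procedure:

```python
import numpy as np

MU = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def energy_bound_filter(states, a_min=6578.0, a_max=50000.0):
    """Prune range/range-rate hypotheses by bounding a constant of
    motion before running an expensive Lambert solver or differential
    correction. `states` is a list of (r, v) inertial position and
    velocity vectors in km and km/s, one per hypothesis; the
    semi-major axis bounds are illustrative.
    """
    keep = []
    for r, v in states:
        eps = 0.5 * np.dot(v, v) - MU / np.linalg.norm(r)  # specific energy
        if eps >= 0:
            continue              # hyperbolic/parabolic: not a bound orbit
        a = -MU / (2.0 * eps)     # semi-major axis from the energy
        if a_min <= a <= a_max:
            keep.append((r, v))
    return keep
```

Only the surviving hypotheses need the costlier orbit determination step, which is the point of the bounding test.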
Abstract:
Currently several thousand objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). In this context both the correct associations among the observations and the orbits of the objects have to be determined. The complexity of the MTT problem is defined by its dimension S, where S stands for the number of 'fences' used in the problem; each fence consists of a set of observations that all originate from different targets. For a dimension of S > 2 the MTT problem becomes NP-hard. As of now no algorithm exists that can solve an NP-hard problem optimally within a reasonable (polynomial) computation time. However, there are algorithms that can approximate the solution with a realistic computational effort. To this end an Elitist Genetic Algorithm is implemented to approximately solve the S > 2 MTT problem in an efficient manner. Its complexity is studied, and it is found that an approximate solution can be obtained in polynomial time. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously and is able to efficiently process large data sets with minimal manual intervention.
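A minimal sketch of an elitist GA follows. It is illustrative only: a real MTT chromosome would encode observation-to-target associations and the fitness would score orbit consistency, whereas here a toy bit-string "onemax" fitness is used:

```python
import random

def elitist_ga(fitness, n_genes, pop_size=60, elites=4,
               generations=120, p_mut=0.02, seed=0):
    """Minimal elitist GA: the best `elites` individuals survive each
    generation unchanged, guaranteeing monotone best fitness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [ind[:] for ind in pop[:elites]]           # elitism
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)     # tournament selection
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]                    # one-point crossover
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy usage: maximize the number of ones in a 30-bit string
best = elitist_ga(sum, 30)
```

Each generation costs O(pop_size · n_genes) plus the fitness evaluations, so the overall runtime is polynomial in the problem size, which is the trade made in exchange for an approximate rather than optimal solution.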
Abstract:
Near-infrared spectroscopy (NIRS) enables the non-invasive measurement of changes in hemodynamics and oxygenation in tissue. Changes in light coupling due to movement of the subject can cause movement artifacts (MAs) in the recorded signals. Several methods have been developed so far that facilitate the detection and reduction of MAs in the data. However, due to fixed parameter values (e.g., a global threshold), none of these methods is perfectly suitable for long-term (i.e., hours) recordings or time-effective when applied to large datasets. We aimed to overcome these limitations by automation, i.e., data-adaptive thresholding specifically designed for long-term measurements, and by introducing a stable long-term signal reconstruction. Our new technique ("acceleration-based movement artifact reduction algorithm", AMARA) is based on combining two methods: the "movement artifact reduction algorithm" (MARA; Scholkmann et al. Phys. Meas. 2010, 31, 649–662) and the "accelerometer-based motion artifact removal" (ABAMAR; Virtanen et al. J. Biomed. Opt. 2011, 16, 087005). We describe AMARA in detail and report on its successful validation using empirical NIRS data measured over the prefrontal cortex in adolescents during sleep. In addition, we compared the performance of AMARA to that of MARA and ABAMAR based on validation data.
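A data-adaptive threshold of the kind motivated above can be sketched as follows. The moving-standard-deviation statistic and the median + k·MAD rule are illustrative assumptions, not AMARA's actual detection criterion:

```python
import numpy as np

def detect_artifacts(signal, fs, win_s=1.0, k=5.0):
    """Flag movement-artifact samples with a data-adaptive threshold:
    compute a moving standard deviation and mark samples whose value
    exceeds median + k * MAD of that statistic. Unlike a fixed global
    threshold, the cutoff adapts to each recording's own noise level,
    which matters for hours-long measurements.
    """
    w = max(1, int(win_s * fs))
    x = np.asarray(signal, float)
    # trailing moving standard deviation (brute force for clarity)
    pad = np.concatenate([np.full(w - 1, x[0]), x])
    mov = np.array([pad[i:i + w].std() for i in range(len(x))])
    med = np.median(mov)
    mad = np.median(np.abs(mov - med)) + 1e-12  # robust spread estimate
    return mov > med + k * mad                  # boolean artifact mask
```

The flagged segments would then be handed to a reconstruction step (spline-based in MARA's case) rather than simply discarded.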