942 results for Subpixel precision
Abstract:
Recent technical advances have enabled, for the first time, reliable in vitro culture of prostate cancer samples as prostate cancer organoids. This breakthrough opens the significant possibility of high-throughput drug screening covering the spectrum of prostate cancer phenotypes seen clinically. These advances will enable precision medicine to become a reality, allowing patient samples to be screened for effective therapeutics ex vivo, with treatments tailored to that individual. This will hopefully lead to enhanced clinical outcomes, reduce morbidity due to ineffective therapies and improve the quality of life of men with advanced prostate cancer.
Abstract:
A key challenge of wide-area kinematic positioning is to overcome the effects of the varying hardware biases in the code signals of the BeiDou system. Based on three geometry-free/ionosphere-free combinations, the elevation-dependent code biases are modelled for all BeiDou satellites. Results from 30-day data sets for 5 baselines of 533 to 2545 km demonstrate that wide-lane (WL) integer-fixing success rates of 98% to 100% can be achieved within 25 min. Under the condition of an HDOP of less than 2, the overall RMS statistics show that ionosphere-free WL single-epoch solutions achieve 24 to 50 cm in the horizontal direction. Smoothing over a moving window of 20 min reduces the RMS values by a factor of about 2. Given the distance-independent nature of the approach, these results show the potential for reliable, high-precision positioning services to be provided over a wide area from a sparsely distributed ground network.
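As an illustration of the smoothing step described above, the sketch below applies a trailing 20-minute moving-average window to epoch-wise horizontal solutions and compares RMS values before and after. The function names, the 30 s epoch interval and the data layout are assumptions made for this example, not details from the paper.

```python
import numpy as np

def moving_window_smooth(positions, epoch_interval_s=30.0, window_min=20.0):
    """Smooth epoch-wise solutions with a trailing moving-average window.

    positions : (N, 2) array of single-epoch horizontal coordinates (east, north) in metres.
    Returns an array of the same shape where each epoch is replaced by the mean
    of the solutions inside the preceding `window_min` minutes.
    """
    positions = np.asarray(positions, dtype=float)
    window_epochs = max(1, int(round(window_min * 60.0 / epoch_interval_s)))
    smoothed = np.empty_like(positions)
    for i in range(len(positions)):
        start = max(0, i - window_epochs + 1)
        smoothed[i] = positions[start:i + 1].mean(axis=0)
    return smoothed

def horizontal_rms(errors):
    """RMS of 2-D horizontal errors, `errors` of shape (N, 2) in metres."""
    errors = np.asarray(errors, dtype=float)
    return float(np.sqrt(np.mean(np.sum(errors ** 2, axis=1))))

# Hypothetical usage with residuals relative to the known baseline coordinates:
# print(horizontal_rms(residuals), horizontal_rms(moving_window_smooth(residuals)))
```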
Abstract:
Skew correction of complex document images is a difficult task. We propose an edge-based connected component approach for robust skew correction of documents with complex layout and content. The algorithm essentially consists of two steps: an 'initialization' step to determine the image orientation from the centroids of the connected components and a 'search' step to find the actual skew of the image. During initialization, we choose two different sets of points regularly spaced across the image, one from left to right and the other from top to bottom. The image orientation is determined from the slope between the two successive nearest neighbors of each of the points in the chosen set. The search step finds successive nearest neighbors that satisfy the parameters obtained in the initialization step. The final skew is determined from the slopes obtained in the 'search' step. Unlike other connected component based methods, the proposed method does not require the binarization step that generally precedes connected component analysis. The method works well for scanned documents with complex layouts at any skew, with a precision of 0.5 degrees.
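The abstract does not give implementation details, but the core idea of estimating skew from slopes between neighbouring connected-component centroids can be sketched roughly as follows. The neighbour-selection rule and the `max_gap` parameter are illustrative assumptions, not the authors' actual 'initialization'/'search' procedure.

```python
import numpy as np

def estimate_skew_from_centroids(centroids, max_gap=200.0):
    """Estimate document skew in degrees from connected-component centroids.

    centroids : (N, 2) array of (x, y) component centroids in pixels.
    For each centroid the slope to its nearest roughly horizontal neighbour is
    computed, and the skew is taken as the median of the resulting angles.
    """
    centroids = np.asarray(centroids, dtype=float)
    angles = []
    for i, (x0, y0) in enumerate(centroids):
        dx = centroids[:, 0] - x0
        dy = centroids[:, 1] - y0
        dist = np.hypot(dx, dy)
        dist[i] = np.inf                                      # skip the point itself
        mask = (np.abs(dx) > np.abs(dy)) & (dist < max_gap)   # roughly horizontal, nearby
        if not mask.any():
            continue
        j = int(np.argmin(np.where(mask, dist, np.inf)))
        angles.append(np.degrees(np.arctan2(dy[j], dx[j])))
    return float(np.median(angles)) if angles else 0.0
```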
Abstract:
Purpose: To examine the effects of gaze position and optical blur, similar to that used in multifocal corrections, on stepping accuracy for a precision stepping task among older adults. Methods: Nineteen healthy older adults (mean age, 71.6 +/- 8.8 years) with normal vision performed a series of precision stepping tasks onto a fixed target. The stepping tasks were performed using a repeated-measures design for three gaze positions (fixating on the stepping target, as well as 30 and 60 cm farther forward of the stepping target) and two visual conditions (best-corrected vision and with +2.50DS blur). Participants' gaze position was tracked using a head-mounted eye tracker. Absolute, anteroposterior, and mediolateral foot placement errors and within-subject foot placement variability were calculated from the locations of foot- and floor-mounted retroreflective markers captured by flash photography of the final foot position. Results: Participants made significantly larger absolute and anteroposterior foot placement errors and exhibited greater foot placement variability when their gaze was directed farther forward of the stepping target. Blur led to significantly increased absolute and anteroposterior foot placement errors and increased foot placement variability. Furthermore, blur differentially increased the absolute and anteroposterior foot placement errors and variability when gaze was directed 60 cm farther forward of the stepping target. Conclusions: Directing gaze farther ahead of stepping locations and the presence of blur both negatively impact the stepping accuracy of older adults. These findings indicate that blur, similar to that used in multifocal corrections, has the potential to increase the risk of trips and falls among older populations when negotiating challenging environments where precision stepping is required, particularly as gaze is directed farther ahead of stepping locations when walking.
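As a small worked example of the error measures reported above, the following sketch computes absolute, anteroposterior and mediolateral foot placement errors and within-subject variability from 2-D marker coordinates. The coordinate convention and the function signature are assumptions for illustration, not the study's analysis code.

```python
import numpy as np

def foot_placement_errors(foot_xy, target_xy):
    """Foot placement error measures from 2-D marker coordinates.

    foot_xy   : (N, 2) final foot-marker positions over N trials,
                column 0 = anteroposterior, column 1 = mediolateral (assumed).
    target_xy : (2,) position of the stepping target in the same frame.
    """
    diff = np.asarray(foot_xy, dtype=float) - np.asarray(target_xy, dtype=float)
    absolute = np.linalg.norm(diff, axis=1)
    return {
        "absolute_error": float(absolute.mean()),
        "ap_error": float(np.abs(diff[:, 0]).mean()),
        "ml_error": float(np.abs(diff[:, 1]).mean()),
        "variability": diff.std(axis=0, ddof=1),  # within-subject SD per axis
    }
```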
Abstract:
The TOTEM experiment at the LHC will measure the total proton-proton cross-section with a precision better than 1%, elastic proton scattering over a wide range in momentum transfer -t = p^2 theta^2 up to 10 GeV^2, and diffractive dissociation, including single, double and central diffraction topologies. The total cross-section will be measured with the luminosity-independent method, which requires simultaneous measurements of the total inelastic rate and of elastic proton scattering down to four-momentum transfers of a few 10^-3 GeV^2, corresponding to leading protons scattered at angles of microradians from the interaction point. This will be achieved using silicon microstrip detectors, which offer attractive properties such as good spatial resolution (<20 um), fast response (O(10 ns)) to particles and radiation hardness up to 10^14 "n"/cm^2. This work reports on the development of an innovative structure at the detector edge that reduces the conventional dead width of 0.5-1 mm to 50-60 um, compatible with the requirements of the experiment.
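For context, the luminosity-independent method referred to above is conventionally written by combining the optical theorem with the measured elastic and inelastic rates; the notation below is the standard one, not necessarily TOTEM's own.

```latex
\sigma_{\mathrm{tot}} = \frac{16\pi}{1+\rho^{2}}\,
  \frac{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}{N_{\mathrm{el}}+N_{\mathrm{inel}}},
\qquad
\mathcal{L} = \frac{1+\rho^{2}}{16\pi}\,
  \frac{\left(N_{\mathrm{el}}+N_{\mathrm{inel}}\right)^{2}}{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}
```

Here rho is the ratio of the real to the imaginary part of the forward elastic amplitude, which is why both the inelastic rate and the elastic rate down to very small |t| must be measured simultaneously.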
Abstract:
By detecting leading protons produced in the central exclusive diffractive process, p+p → p+X+p, one can measure the missing mass and scan for possible new particle states such as the Higgs boson. This process augments, in a model-independent way, the standard methods for new particle searches at the Large Hadron Collider (LHC) and will allow detailed analyses of the produced central system, such as the spin-parity properties of the Higgs boson. The exclusive central diffractive process makes precision studies of gluons at the LHC possible and complements the physics scenarios foreseen at the next e+e− linear collider. This thesis first presents the conclusions of the first systematic analysis of the expected precision of the leading proton momentum measurement and the accuracy of the reconstructed missing mass. In this initial analysis, the scattered protons are tracked along the LHC beam line, and the uncertainties expected in beam transport and in the detection of the scattered leading protons are accounted for. The main focus of the thesis is on developing the radiation-hard precision detector technology necessary for coping with the extremely demanding experimental environment of the LHC. This will be achieved by using a 3D silicon detector design which, in addition to radiation hardness of up to 5×10^15 neutrons/cm^2, offers properties such as a high signal-to-noise ratio, fast signal response to radiation and sensitivity close to the very edge of the detector. This work reports on the development of a novel semi-3D detector design that simplifies the 3D fabrication process but retains the properties of the 3D detector design required at the LHC and in other imaging applications.
Abstract:
This thesis focuses on the issue of testing sleepiness quantitatively. The issue is relevant to policymakers concerned with traffic and occupational safety, as such testing provides a tool for safety legislation and surveillance. The findings of this thesis provide guidelines for a posturographic sleepiness tester. Sleepiness ensuing from staying awake for merely 17 h impairs our performance as much as the legal blood alcohol concentration limit of 0.5 does. Hence, sleepiness is a major risk factor in transportation and occupational accidents. The lack of convenient, commercial sleepiness tests precludes testing impending sleepiness levels, in contrast to simply breath testing for alcohol intoxication. Posturography is a potential sleepiness test, since clinical diurnal balance testing suggests the hypothesis that time awake could be estimated posturographically. Relying on this hypothesis, this thesis examines posturographic sleepiness testing for instrumentation purposes. Empirical results from 63 subjects, whose balance we tested with a force platform during wakefulness for a maximum of 36 h, show that sustained wakefulness impairs balance. The results show that time awake can be estimated posturographically with 88% accuracy and 97% precision, which validates our hypothesis. Results also show that balance scores tested at 13:30 hours serve as a threshold to detect excessive sleepiness. Analytical results show that the test length has a marked effect on estimation accuracy: 18 s tests suffice to identify sleepiness-related balance changes, but trade off some of the accuracy achieved with 30 s tests. The procedure to estimate time awake relies on equating the subject's test score to a reference table (comprising balance scores tested during sustained wakefulness, regressed against time awake). Empirical results showed that sustained wakefulness explains 60% of the diurnal balance variations, whereas the time of day explains 40% of the balance variations. The latter fact implies that time awake estimations must also rely on knowing the local times of both the test and reference scores.
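A minimal sketch of the estimation procedure described above (equating a subject's test score to a reference of balance scores regressed against time awake) might look like the following; the linear regression and the function interface are illustrative simplifications of the thesis's reference-table approach.

```python
import numpy as np

def estimate_time_awake(test_score, ref_scores, ref_times_awake):
    """Estimate time awake (hours) by equating a balance test score to a reference curve.

    ref_scores      : balance scores measured during sustained wakefulness.
    ref_times_awake : corresponding times awake in hours.
    A linear regression of score against time awake is inverted at the test score.
    """
    slope, intercept = np.polyfit(np.asarray(ref_times_awake, dtype=float),
                                  np.asarray(ref_scores, dtype=float), deg=1)
    return (test_score - intercept) / slope
```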
Abstract:
Frequency multiplication (FM) can be used to design low-power frequency synthesizers. This is achieved by running the VCO at a much reduced frequency while employing a power-efficient frequency multiplier, thereby also eliminating the first few dividers. Quadrature signals can be generated by frequency-multiplying low-frequency I/Q signals; however, this also multiplies the quadrature error of these signals. Another approach is to generate additional edges from the low-frequency oscillator (LFO) and develop a quadrature FM, but this makes the I/Q precision heavily dependent on process mismatches in the ring oscillator. In this paper we examine the use of fewer edges from the LFO and a single-stage polyphase filter to generate approximate quadrature signals, which are then followed by an injection-locked quadrature VCO to generate high-precision I/Q signals. Simulation comparisons with the existing approach show that the proposed method offers very good phase accuracy of 0.5 degrees with only a modest increase in power dissipation for the 2.4 GHz IEEE 802.15.4 standard using UMC 0.13 µm RFCMOS technology.
Abstract:
Currently, we live in an era characterized by the completion and first runs of the LHC accelerator at CERN, which is hoped to provide the first experimental hints of what lies beyond the Standard Model of particle physics. In addition, the last decade has witnessed a new dawn of cosmology, where it has truly emerged as a precision science. Largely due to the WMAP measurements of the cosmic microwave background, we now believe we have quantitative control of much of the history of our universe. These two experimental windows offer us not only an unprecedented view of the smallest and largest structures of the universe, but also a glimpse of the very first moments in its history. At the same time, they require theorists to focus on the fundamental challenges awaiting at the boundary of high energy particle physics and cosmology. What were the contents and properties of matter in the early universe? How is one to describe its interactions? What implications do the various models of physics beyond the Standard Model have for the subsequent evolution of the universe? In this thesis, we explore the connection between supersymmetric theories in particular and the evolution of the early universe. First, we provide the reader with a general introduction to modern-day particle cosmology from two angles: on the one hand by reviewing our current knowledge of the history of the early universe, and on the other hand by introducing the basics of supersymmetry and its derivatives. Subsequently, with the help of the developed tools, we direct the attention to the specific questions addressed in the three original articles that form the main scientific content of the thesis. Each of these papers concerns a distinct cosmological problem, ranging from the generation of the matter-antimatter asymmetry to inflation, and finally to the origin or very early stages of the universe. They nevertheless share a common factor in their use of the machinery of supersymmetric theories to address open questions in the corresponding cosmological models.
Abstract:
Agricultural pests are responsible for millions of dollars in crop losses and management costs every year. In order to implement optimal site-specific treatments and reduce control costs, new methods to accurately monitor and assess pest damage need to be investigated. In this paper we explore the combination of unmanned aerial vehicles (UAVs), remote sensing and machine learning techniques as a promising technology to address this challenge. The deployment of UAVs as a sensor platform is a rapidly growing field of study for biosecurity and precision agriculture applications. In this experiment, a data collection campaign is performed over a sorghum crop severely damaged by white grubs (Coleoptera: Scarabaeidae). The larvae of these scarab beetles feed on the roots of plants, which in turn impairs root exploration of the soil profile. In the field, crop health status could be classified into three levels: bare soil where plants were decimated, transition zones of reduced plant density, and healthy canopy areas. In this study, we describe the UAV platform deployed to collect high-resolution RGB imagery as well as the image processing pipeline implemented to create an orthoimage. An unsupervised machine learning approach is formulated in order to create a meaningful partition of the image into each of the crop health levels. The aim of the approach is to simplify the image analysis step by minimizing user input requirements and avoiding the manual data labeling necessary in supervised learning approaches. The implemented algorithm is based on the K-means clustering algorithm. In order to control high-frequency components present in the feature space, a neighbourhood-oriented parameter is introduced by applying Gaussian convolution kernels prior to K-means. The outcome of this approach is a soft K-means algorithm similar to the EM algorithm for Gaussian mixture models. The results show the algorithm delivers decision boundaries that consistently classify the field into three clusters, one for each crop health level. The methodology presented in this paper represents an avenue for further research towards automated crop damage assessment and biosecurity surveillance.
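A rough sketch of the clustering pipeline described above, with hypothetical function names: each RGB band of the orthoimage is smoothed with a Gaussian kernel to suppress high-frequency components in the feature space, and the smoothed per-pixel features are then partitioned into three clusters. Note that this uses plain (hard) K-means on smoothed features rather than the soft K-means variant derived in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def classify_crop_health(orthoimage, sigma=5.0, n_clusters=3, seed=0):
    """Partition an RGB orthoimage into crop-health clusters (illustrative sketch).

    orthoimage : (H, W, 3) float array of RGB values from the UAV orthoimage.
    Each band is Gaussian-smoothed before clustering, which suppresses
    high-frequency components and encourages spatially coherent clusters.
    """
    orthoimage = np.asarray(orthoimage, dtype=float)
    smoothed = np.stack(
        [gaussian_filter(orthoimage[..., b], sigma=sigma)
         for b in range(orthoimage.shape[-1])],
        axis=-1,
    )
    features = smoothed.reshape(-1, orthoimage.shape[-1])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)
    return labels.reshape(orthoimage.shape[:2])  # one cluster label per pixel
```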
Abstract:
In uplink OFDMA, carrier frequency offsets (CFOs) and/or timing offsets (TOs) of other users with respect to a desired user can cause multiuser interference (MUI). In practical uplink OFDMA systems (e.g., the IEEE 802.16e standard), the effect of this MUI is made acceptably small by requiring that frequency/timing alignment be achieved at the receiver with high precision (e.g., the CFO must be within 1% of the subcarrier spacing and the TO must be within 1/8th of the cyclic prefix duration in IEEE 802.16e), which is realized using complex closed-loop frequency/timing correction between the transmitter and the receiver. An alternative open-loop approach to handling the MUI induced by large CFOs and TOs is to employ interference cancellation techniques at the receiver. In this paper, we first analytically characterize the degradation in the average output signal-to-interference ratio (SIR) due to the combined effect of large CFOs and TOs in uplink OFDMA. We then propose a parallel interference canceller (PIC) for the mitigation of interference due to CFOs and TOs in this system. We show that the proposed PIC effectively mitigates the performance loss due to CFO/TO-induced interference in uplink OFDMA.
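The abstract does not spell out the canceller, but the flavour of a parallel interference cancellation stage can be sketched under strong simplifications (flat channels, CFO only, timing offsets ignored): each user's frequency-domain contribution is reconstructed from its current symbol estimates and, for every user in parallel, the sum of all other users' reconstructed contributions is subtracted from the received signal before re-detection. All names and interfaces below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def user_freq_contribution(symbols, subcarriers, cfo, n_fft):
    """Frequency-domain contribution of one user's OFDM symbol after its CFO.

    symbols     : complex data symbols on this user's allocated subcarriers.
    subcarriers : indices of those subcarriers.
    cfo         : carrier frequency offset in units of the subcarrier spacing.
    """
    x = np.zeros(n_fft, dtype=complex)
    x[np.asarray(subcarriers)] = symbols
    n = np.arange(n_fft)
    time = np.fft.ifft(x) * np.exp(2j * np.pi * cfo * n / n_fft)  # CFO rotates time samples
    return np.fft.fft(time)                                       # leakage spreads onto all bins

def pic_stage(y, users, n_fft):
    """One parallel interference cancellation stage (simplified sketch).

    y     : received frequency-domain vector at the base station.
    users : list of dicts with 'subcarriers', 'cfo' and current symbol estimates 'est'.
    For every user, the reconstructed contributions of all other users are
    subtracted from y in parallel before that user is re-detected.
    """
    contributions = [user_freq_contribution(u["est"], u["subcarriers"], u["cfo"], n_fft)
                     for u in users]
    total = sum(contributions)
    return [y - (total - c) for c in contributions]
```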
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few. The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. Since the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred level precision. The progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang. This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The studied approximate methods are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage. We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of residual noise present in the maps. In the end, the use of modern cosmological data is presented in the study of an extended cosmological model, the correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of the isocurvature modes would have a considerable impact due to their power in model selection.
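For orientation, the destriping approach mentioned above is usually formulated with a simple data model; the notation below is the generic one found in the destriping literature, not necessarily the thesis's own.

```latex
% Time-ordered data d as a scanned sky map plus baseline offsets plus white noise:
d = P\,m + F\,a + w,
% baselines a and map m follow from minimizing
\chi^{2} = (d - Pm - Fa)^{\mathsf{T}} C_{w}^{-1} (d - Pm - Fa),
% and the destriped map is
\hat{m} = \left(P^{\mathsf{T}} C_{w}^{-1} P\right)^{-1} P^{\mathsf{T}} C_{w}^{-1} \left(d - F\hat{a}\right).
```

Here P is the pointing matrix, F spreads the baseline amplitudes a over the time-ordered data and C_w is the white-noise covariance; residual noise estimation then amounts to characterizing the uncertainty the estimated baselines leave in the map.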
Abstract:
The electroweak theory is the part of the Standard Model of particle physics that describes the weak and electromagnetic interactions between elementary particles. Since its formulation almost 40 years ago, it has been experimentally verified to high accuracy, and today it has a status as one of the cornerstones of particle physics. The thermodynamics of electroweak physics has been studied ever since the theory was written down, and the features the theory exhibits under extreme conditions remain an interesting research topic even today. In this thesis, we consider some aspects of electroweak thermodynamics. Specifically, we compute the pressure of the Standard Model to high precision and study the structure of the electroweak phase diagram when finite chemical potentials for all the conserved particle numbers in the theory are introduced. In the first part of the thesis, the theory, methods and essential results from the computations are introduced. The original research publications are reprinted at the end.
Abstract:
Quantum chromodynamics (QCD) is the theory describing the interaction between quarks and gluons. At low temperatures, quarks are confined, forming hadrons such as protons and neutrons. However, at extremely high temperatures the hadrons break apart and the matter transforms into a plasma of individual quarks and gluons. In this thesis the quark-gluon plasma (QGP) phase of QCD is studied using lattice techniques in the framework of the dimensionally reduced effective theories EQCD and MQCD. Two quantities are of particular interest: the pressure (or grand potential) and the quark number susceptibility. At high temperatures the pressure admits a generalised coupling constant expansion in which some coefficients are non-perturbative. We determine the first such contribution, of order g^6, by performing lattice simulations in MQCD. This requires high-precision lattice calculations, which we perform with different numbers of colors N_c to obtain the N_c-dependence of the coefficient. The quark number susceptibility is studied by performing lattice simulations in EQCD. We measure both flavor singlet (diagonal) and non-singlet (off-diagonal) quark number susceptibilities. The finite chemical potential results are obtained using analytic continuation. The diagonal susceptibility approaches the perturbative result above 20 T_c, but below that temperature we observe significant deviations. The results agree well with 4d lattice data down to temperatures of 2 T_c.
Abstract:
Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam. The high velocity means that molecules can be destroyed and removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics. System performance and some of the 14C measurements done with the system are described. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process. This enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements. In particular, the results can be improved for samples with very low 14C concentrations or measured only a few times.
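To make the data model concrete, the sketch below simulates the two ingredients named above, an AR(1) instrumental drift and Poisson-distributed 14C counts. All numbers and names are illustrative, and the thesis's model infers the drift and isotope ratios jointly with Bayesian methods rather than by forward simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_drift(n, phi=0.95, sigma=0.02):
    """Instrumental drift modelled as a first-order autoregressive (AR(1)) process."""
    d = np.zeros(n)
    for t in range(1, n):
        d[t] = phi * d[t - 1] + rng.normal(0.0, sigma)
    return d

# Illustrative data model for a run of AMS measurements: the expected 14C count
# of each measurement is the sample's true ratio scaled by a slowly drifting
# machine efficiency, and the observed counts are Poisson distributed.
n_meas = 50
true_ratio = np.full(n_meas, 0.5)          # e.g. fraction of the standard's ratio
efficiency = np.exp(ar1_drift(n_meas))     # multiplicative drift around 1
observed = rng.poisson(1000.0 * true_ratio * efficiency)

# Interleaved standards of known ratio let the drift be estimated and divided out,
# which is what the continuous AR(1) model enables even when the standards were
# measured at different times.
```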