924 results for Limited Kinematic


Relevance: 20.00%

Publisher:

Abstract:

An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images; the reconstructed objects then no longer faithfully represent the original, and within the volumes the accuracy varies unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic treatment of the measured X-rays, together with a priori knowledge about the materials the examined object consists of. For each voxel, the proposed method outputs a numerical value representing the probability that a predefined material was present at the voxel's position at the time of the X-ray acquisition. Such a probabilistic quality measure has been lacking until now. In our experiments, falsely reconstructed areas are detected by their low probability, while in accurately reconstructed areas a high probability predominates. Receiver Operating Characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction.
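The kind of per-voxel probabilistic evaluation described can be illustrated with a toy example: given a voxel's reconstructed attenuation value, a noise model, and a priori knowledge of candidate materials' attenuation coefficients, Bayes' rule yields a material probability for each voxel. The Gaussian noise model, the coefficients and the function name below are illustrative assumptions, not the paper's actual method.

```python
import math

def material_probability(observed_mu, materials, sigma=0.05):
    """Posterior probability of each candidate material at one voxel,
    given its reconstructed attenuation value 'observed_mu'.
    Assumes Gaussian measurement noise and uniform priors (illustrative)."""
    # Gaussian likelihood of the observation under each material hypothesis
    likelihoods = {
        name: math.exp(-((observed_mu - mu) ** 2) / (2 * sigma ** 2))
        for name, mu in materials.items()
    }
    total = sum(likelihoods.values())
    return {name: l / total for name, l in likelihoods.items()}

# Hypothetical attenuation coefficients (arbitrary units)
materials = {"air": 0.0, "soft_tissue": 0.2, "bone": 0.5}
post = material_probability(0.48, materials)
```

A voxel whose value sits close to one material's coefficient receives a probability near 1 for that material; a voxel in a falsely reconstructed region, far from every candidate, yields no dominant material.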

Abstract:

The status of five species of commercially exploited sharks within the Great Barrier Reef Marine Park (GBRMP) and south-east Queensland was assessed using a data-limited approach. The annual harvest rate, U, estimated empirically from tagging between 2011 and 2013, was compared with an analytically derived proxy for the optimal equilibrium harvest rate, U_MSY-Lim. Median estimates of U for the three principal retained species, Australian blacktip shark, Carcharhinus tilstoni, spot-tail shark, Carcharhinus sorrah, and spinner shark, Carcharhinus brevipinna, were 0.10, 0.06 and 0.07 year⁻¹, respectively. Median U for two retained, non-target species, pigeye shark, Carcharhinus amboinensis, and Australian sharpnose shark, Rhizoprionodon taylori, were 0.27 and 0.01 year⁻¹, respectively. For all species except the Australian blacktip shark, the median ratio U/U_MSY-Lim was <1. The blacktip's high vulnerability to fishing, combined with its life history characteristics, meant that U_MSY-Lim was low (0.04-0.07 year⁻¹) and that U/U_MSY-Lim was likely to be >1. Harvest of the Australian blacktip shark above U_MSY could place this species at greater risk of localised depletion in parts of the GBRMP. The results indicated that the much higher catches, and presumably higher U, during the early 2000s were likely unsustainable. The unexpectedly high U on the pigeye shark indicated that output-based management controls may not have been effective in reducing harvest levels for all species, particularly those caught incidentally by other fishing sectors, including the recreational sector. © 2016 Elsevier B.V.
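The central comparison of the assessment, the ratio of the tagging-derived harvest rate U to the U_MSY-Lim proxy, can be checked with the figures quoted above; the abstract gives a U_MSY-Lim range only for the Australian blacktip shark, so only its ratio is computed here.

```python
# Median harvest rates U (year^-1) from the 2011-2013 tagging study
U = {"C. tilstoni": 0.10, "C. sorrah": 0.06, "C. brevipinna": 0.07,
     "C. amboinensis": 0.27, "R. taylori": 0.01}

# U_MSY-Lim is quoted in the abstract only for the Australian blacktip
u_msy_lim_blacktip = (0.04, 0.07)

# Ratio U / U_MSY-Lim across the reported range for C. tilstoni
ratios = [U["C. tilstoni"] / u for u in u_msy_lim_blacktip]
# Both endpoints exceed 1, consistent with U/U_MSY-Lim being likely > 1
```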

Abstract:

In the first part of this thesis we search for beyond-the-Standard-Model physics through the search for anomalous production of the Higgs boson using the razor kinematic variables. We search for anomalous Higgs boson production using proton-proton collisions at a center-of-mass energy of √s = 8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.8 fb⁻¹.

In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s=8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two photon final state.

The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations found from Standard Model production of the Higgs boson.

We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass, localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, an observation with a p-value of 10⁻³ and a local significance of 3.35σ. This background prediction comes from 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it produces a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
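A rough cross-check of the quoted excess: the Poisson probability of observing 5 or more events over an expected background of 0.54 can be computed directly. This sketch ignores the ±0.28 background uncertainty that the full analysis folds in, so it yields a somewhat smaller p-value than the quoted 10⁻³.

```python
import math
from statistics import NormalDist

def poisson_tail(mu, n_obs):
    """P(N >= n_obs) for N ~ Poisson(mu): one minus the CDF at n_obs - 1."""
    return 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                     for k in range(n_obs))

p = poisson_tail(0.54, 5)          # background-only tail probability
z = NormalDist().inv_cdf(1.0 - p)  # one-sided Gaussian-equivalent significance
```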

In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, whose ground state is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and thereby train the classifier. We find that we are able to do this successfully in fewer than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
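The mapping described, training as a search for the ground state of a spin Hamiltonian, can be illustrated with a toy Ising problem solved by brute force in place of the annealer. The couplings below are hypothetical and the example shows only the structure of the optimization, not the thesis's actual Hamiltonian.

```python
from itertools import product

# Hypothetical Ising couplings: h_i are local fields (per-weak-classifier
# terms), J_ij couple pairs of weak-classifier spins (correlation penalties)
h = [0.5, -0.3, 0.2]
J = {(0, 1): 0.4, (1, 2): -0.6, (0, 2): 0.1}

def energy(spins):
    """Ising energy E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j."""
    e = sum(hi * s for hi, s in zip(h, spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# Brute-force ground-state search: the role the annealer plays in the thesis
ground = min(product([-1, 1], repeat=len(h)), key=energy)
# Spins s_i = +1 in the ground state select which weak classifiers
# enter the final ensemble
```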

Abstract:

Dynamic knee valgus is a multi-planar motion that has been associated with anterior cruciate ligament injuries and patellofemoral pain syndrome. Clinical assessment of dynamic knee valgus can be made by looking for the visual appearance of excessive medial knee displacement (MKD) in the double-leg squat (DLS). The purpose of this dissertation was to identify the movement patterns and neuromuscular strategies associated with MKD during the DLS. Twenty-four control subjects and eight individuals showing MKD during the DLS participated in the study. Significant differences were found between subjects demonstrating MKD and a control (CON) group in the electromyographic amplitude of the adductor magnus, biceps femoris, vastus lateralis and vastus medialis muscles (p < 0.05) during the descending phase of the DLS. During the ascending phase, group differences were found for the adductor magnus and rectus femoris muscles (p < 0.05). Kinematic analysis revealed higher minimum and maximum ankle abduction and knee internal rotation angles (p < 0.05) for the MKD group. Individuals showing excessive MKD also had greater hip adduction/abduction excursion. Our results suggest that higher tibial internal rotation and knee internal rotation angles in the initial position of the DLS are associated with MKD. The neuromuscular strategy contributing to MKD was higher adductor magnus activation, whereas the biceps femoris, vastus lateralis and vastus medialis activated more to stabilize the knee in response to the internal rotation moment.

Abstract:

The principle of GNSS positioning is based, in short, on solving a mathematical problem involving the observation of distances from the user to a set of satellites with known coordinates. The resulting position can be computed in absolute or relative mode. Absolute positioning requires only one receiver to determine the position. Relative positioning, in turn, relies on reference stations and involves receivers beyond the user's own. The methods most commonly used to determine the position of a mobile platform with centimetre-level accuracy are based on this latter type of positioning. However, they have the disadvantage of depending on reference stations with limited range, and they require simultaneous observations of the same satellites by both the station and the receiver. To address this, a new GNSS positioning methodology in absolute mode was developed, based on modelling or removing the errors associated with each component of the observation equations and on the use of precise ephemerides and satellite clock corrections. This positioning method is called Precise Point Positioning (PPP) and maintains a high accuracy, equivalent to that of relative positioning systems. In this work, after an in-depth study of the subject, an academic PPP application was developed using the C++ class library of the GPS Toolkit, which determines the receiver's position and velocity in kinematic mode and in real time. The application was tested with observation data from a static station (processed in kinematic mode) and from a moving station installed aboard NRP Auriga. The results achieved decimetre-level accuracy for position and cm/s-level accuracy for velocity.
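The underlying positioning problem, recovering a position from observed distances to satellites with known coordinates, can be sketched as an iterative least-squares (Gauss-Newton) solve. The satellite coordinates, the error-free ranges and the function name below are illustrative placeholders; a real PPP solution also estimates the receiver clock and models many error terms.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix (for Cramer's rule)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve_position(sats, ranges, guess=(0.0, 0.0, 0.0), iters=10):
    """Gauss-Newton least squares: find x minimizing sum (|x - s_i| - rho_i)^2.
    Simplified sketch: no receiver clock bias, no atmospheric error models."""
    x = list(guess)
    for _ in range(iters):
        A, r = [], []
        for s, rho in zip(sats, ranges):
            d = math.dist(s, x)
            A.append([(x[k] - s[k]) / d for k in range(3)])  # unit line of sight
            r.append(rho - d)                                # range residual
        # Normal equations N dx = b with N = A^T A, b = A^T r
        N = [[sum(a[i] * a[j] for a in A) for j in range(3)] for i in range(3)]
        b = [sum(a[i] * ri for a, ri in zip(A, r)) for i in range(3)]
        dN = det3(N)
        dx = []
        for i in range(3):  # Cramer's rule: replace column i with b
            Ni = [row[:] for row in N]
            for k in range(3):
                Ni[k][i] = b[k]
            dx.append(det3(Ni) / dN)
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# Hypothetical satellite positions (km) and error-free ranges to a test point
true = (100.0, -200.0, 300.0)
sats = [(20000.0, 0.0, 0.0), (0.0, 20000.0, 0.0),
        (0.0, 0.0, 20000.0), (12000.0, 12000.0, 12000.0)]
ranges = [math.dist(s, true) for s in sats]
est = solve_position(sats, ranges)
```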

Abstract:

Background: Transcatheter closure of atrial septal defects (ASD) has been accepted worldwide as an alternative to surgical closure, with excellent results. This interventional, non-surgical technique plays an important role in the treatment of ASD, particularly in the developing world where resources are limited. Objectives: To report the outcomes and short-term follow-up of transcatheter closure of ASD over a 12-year period at our institution with limited resources. Patients and Methods: This retrospective study included all patients with a diagnosis of secundum ASD and significant shunting (Qp/Qs > 1.5:1), as well as a dilated right atrium and right ventricle, who underwent transcatheter closure at the Integrated Cardiovascular Center (PJT), Dr. Cipto Mangunkusumo Hospital, between October 2002 and October 2014. One hundred fifty-two patients enrolled in this study were candidates for device closure. Right and left heart catheterization was performed before the procedure. All patients underwent physical examination, ECG, chest X-ray and transthoracic echocardiography (TTE) prior to device implantation. Results: A total of 152 patients with significant ASD underwent device implantation. Subjects' ages ranged from 0.63 to 69.6 years, with a median of 9.36 years and a mean of 16.30 years. They comprised 33 (21.7%) males and 119 (78.3%) females, with a mean body weight of 29.9 kg (range 8 to 75; SD 18.2). The device was successfully implanted in 150 patients; the majority received the Amplatzer septal occluder (147/150; 98%) and the remainder the Heart Lifetech ASD occluder (3/150; 2%), while the two remaining cases were not suitable for device closure and were referred for surgery. The mean ASD size was 19.75 (range 14 - 25) mm. During the procedure, 5 (4.9%) patients had bradycardia and 3 (2.9%) had supraventricular tachycardia (SVT), all of which resolved.
Conclusions: In our center, with limited facilities and manpower, transcatheter closure of atrial septal defects was effective and safe as an alternative to surgery. The outcome and short-term follow-up revealed excellent results, but long-term follow-up is needed.

Abstract:

This thesis details the design and applications of a terahertz (THz) frequency comb spectrometer. The spectrometer employs two Ti:Sapphire femtosecond oscillators with repetition rates of approximately 80 MHz, offset-locked at 100 Hz to continuously sample a time delay of 12.5 ns at a maximum time delay resolution of 15.6 fs. These oscillators emit continuous pulse trains, allowing the generation of a THz pulse train by the master, or pump, oscillator and the sampling of this THz pulse train by the slave, or probe, oscillator via the electro-optic effect. Collecting a train of 16 consecutive THz pulses and taking the Fourier transform of this pulse train produces a decade-spanning frequency comb, from 0.25 to 2.5 THz, with a comb tooth width of 5 MHz and a comb tooth spacing of ~80 MHz. This frequency comb is suitable for Doppler-limited rotational spectroscopy of small molecules. Here, data from 68 individual scans at slightly different pump oscillator repetition rates were combined, producing an interleaved THz frequency comb spectrum with a maximum interval between comb teeth of 1.4 MHz, enabling THz frequency comb spectroscopy.
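The timing figures quoted above follow from simple arithmetic on the two repetition rates and the offset lock, and can be re-derived directly:

```python
f_rep = 80e6      # repetition rate of each oscillator (Hz)
f_offset = 100.0  # offset lock between the two oscillators (Hz)

scan_window = 1 / f_rep               # delay window swept each cycle: 12.5 ns
time_step = f_offset / f_rep**2       # delay increment per pulse pair: ~15.6 fs
tooth_spacing = f_rep                 # comb teeth are spaced by ~80 MHz
tooth_width = 1 / (16 * scan_window)  # 16 consecutive pulses -> 5 MHz teeth
```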

The accuracy of the THz frequency comb spectrometer was tested, achieving a root mean square error of 92 kHz measuring selected absorption center frequencies of water vapor at 10 mTorr, and a root mean square error of 150 kHz in measurements of a K-stack of acetonitrile. This accuracy is sufficient for fitting of measured transitions to a model Hamiltonian to generate a predicted spectrum for molecules of interest in the fields of astronomy and physical chemistry. As such, the rotational spectra of methanol and methanol-OD were acquired by the spectrometer. Absorptions from 1.3 THz to 2.0 THz were compared to JPL catalog data for methanol and the spectrometer achieved an RMS error of 402 kHz, improving to 303 kHz when excluding low signal-to-noise absorptions. This level of accuracy compares favorably with the ~100 kHz accuracy achieved by JPL frequency multiplier submillimeter spectrometers. Additionally, the relative intensity performance of the THz frequency comb spectrometer is linear across the entire decade-spanning bandwidth, making it the preferred instrument for recovering lineshapes and taking absolute intensity measurements in the THz region. The data acquired by the Terahertz Frequency Comb Spectrometer for methanol-OD is of comparable accuracy to the methanol data and may be used to refine the fit parameters for the predicted spectrum of methanol-OD.

Abstract:

The electromagnetic form factors are the most fundamental observables encoding information about the internal structure of the nucleon. The electric ($G_{E}$) and magnetic ($G_{M}$) form factors contain information about the spatial distribution of charge and magnetization inside the nucleon. A significant discrepancy exists between the Rosenbluth and polarization transfer measurements of the electromagnetic form factors of the proton. One possible explanation for the discrepancy is the contribution of two-photon exchange (TPE) effects. Theoretical calculations estimating the magnitude of the TPE effect are highly model dependent, and limited experimental evidence for such effects exists. Experimentally, the TPE effect can be measured by comparing the positron-proton elastic scattering cross section to that of electron-proton scattering, $\left(R = \frac{\sigma (e^{+}p)}{\sigma (e^{-}p)}\right)$. The ratio $R$ was measured over a wide range of kinematics, utilizing a 5.6 GeV primary electron beam produced by the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab. This dissertation explored the dependence of $R$ on kinematic variables such as the squared four-momentum transfer ($Q^{2}$) and the virtual photon polarization parameter ($\varepsilon$). A mixed electron-positron beam was produced from the primary electron beam in experimental Hall B. The mixed beam was scattered from a liquid hydrogen (LH$_{2}$) target. Both the scattered lepton and the recoil proton were detected by the CEBAF Large Acceptance Spectrometer (CLAS). Elastic events were then identified using elastic scattering kinematics. This work extracted the $Q^{2}$ dependence of $R$ at high $\varepsilon$ ($\varepsilon > 0.8$) and the $\varepsilon$ dependence of $R$ at $\langle Q^{2} \rangle \approx 0.85$ GeV$^{2}$. In these kinematics, our data confirm the validity of the hadronic calculations of the TPE effect by Blunden, Melnitchouk, and Tjon. This hadronic TPE effect, with additional corrections contributed by higher excitations of the intermediate-state nucleon, largely reconciles the Rosenbluth and polarization transfer measurements of the electromagnetic form factors.
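For context, the Rosenbluth technique referenced above extracts the form factors from the $\varepsilon$ dependence of the reduced cross section in the one-photon-exchange approximation:

$$\sigma_{R}(Q^{2},\varepsilon) = \tau\, G_{M}^{2}(Q^{2}) + \varepsilon\, G_{E}^{2}(Q^{2}), \qquad \tau = \frac{Q^{2}}{4M^{2}},$$

where $M$ is the proton mass. An $\varepsilon$-dependent TPE contribution distorts the extracted slope ($G_{E}^{2}$), which is why accounting for TPE can reconcile the Rosenbluth and polarization transfer results.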

Abstract:

Background: Reablement, also known as restorative care, is one possible approach to home-care services for older adults at risk of functional decline. Unlike traditional home-care services, reablement is frequently time-limited (usually six to 12 weeks) and aims to maximise independence by offering an intensive, multidisciplinary, person-centred and goal-directed intervention. Objectives: To assess the effects of time-limited home-care reablement services (up to 12 weeks) for maintaining and improving the functional independence of older adults (aged 65 years or more) when compared with usual home care or a wait-list control group. Search methods: We searched the following databases with no language restrictions during April to June 2015: the Cochrane Central Register of Controlled Trials (CENTRAL); MEDLINE (OvidSP); Embase (OvidSP); PsycINFO (OvidSP); ERIC; Sociological Abstracts; ProQuest Dissertations and Theses; CINAHL (EBSCOhost); SIGLE (OpenGrey); AgeLine and Social Care Online. We also searched the reference lists of relevant studies and reviews and contacted authors in the field. Selection criteria: We included randomised controlled trials (RCTs), cluster-randomised or quasi-randomised trials of time-limited reablement services for older adults (aged 65 years or more) delivered in their home and incorporating a usual home-care or wait-list control group. Data collection and analysis: Two authors independently assessed studies for inclusion, extracted data, assessed the risk of bias of individual studies and considered the quality of the evidence using GRADE. We contacted study authors for additional information where needed. Main results: Two studies, comparing reablement with usual home-care services with 811 participants, met our eligibility criteria for inclusion; we also identified three potentially eligible studies, but their findings were not yet available. One included study was conducted in Western Australia with 750 participants (mean age 82.29 years).
The second study was conducted in Norway (61 participants; mean age 79 years). We are very uncertain as to the effects of reablement compared with usual care as the evidence was of very low quality for all of the outcomes reported. The main findings were as follows. Functional status: very low quality evidence suggested that reablement may be slightly more effective than usual care in improving function at nine to 12 months (lower scores reflect greater independence; standardised mean difference (SMD) -0.30; 95% confidence interval (CI) -0.53 to -0.06; 2 studies with 249 participants). Adverse events: reablement may make little or no difference to mortality at 12 months' follow-up (RR 0.97; 95% CI 0.74 to 1.29; 2 studies with 811 participants) or rates of unplanned hospital admission at 24 months (RR 0.94; 95% CI 0.85 to 1.03; 1 study with 750 participants). The very low quality evidence also means we are uncertain whether reablement may influence quality of life (SMD -0.23; 95% CI -0.48 to 0.02; 2 trials with 249 participants) or living arrangements (RR 0.92, 95% CI 0.62 to 1.34; 1 study with 750 participants) at time points up to 12 months. People receiving reablement may be slightly less likely to have been approved for a higher level of personal care than people receiving usual care over the 24 months' follow-up (RR 0.87; 95% CI 0.77 to 0.98; 1 trial, 750 participants). Similarly, although there may be a small reduction in total aggregated home and healthcare costs over the 24-month follow-up (reablement: AUD 19,888; usual care: AUD 22,757; 1 trial with 750 participants), we are uncertain about the size and importance of these effects as the results were based on very low quality evidence. Neither study reported user satisfaction with the service. Authors' conclusions: There is considerable uncertainty regarding the effects of reablement as the evidence was of very low quality according to our GRADE ratings. 
Therefore, the effectiveness of reablement services cannot be supported or refuted until more robust evidence becomes available. There is an urgent need for high quality trials across different health and social care systems due to the increasingly high profile of reablement services in policy and practice in several countries.

Abstract:

We measure quality of service (QoS) in a wireless network architecture for transoceanic aircraft. A distinguishing characteristic of the network scheme we analyze is that it mixes the concept of Delay Tolerant Networking (DTN), through the exploitation of opportunistic contacts, with direct satellite access in a limited number of the nodes. We provide a graph sparsification technique for deriving a network model that satisfies the key properties of a real aeronautical opportunistic network while enabling scalable simulation. This reduced model allows us to analyze the QoS impact of introducing Internet-like traffic in the form of outgoing data from passengers. Promoting QoS in DTNs is usually challenging due to their long delays and scarce resources. The availability of satellite communication links offers a chance to provide an improved degree of service relative to a purely opportunistic approach, and this improvement therefore needs to be properly measured and quantified. Our analysis focuses on several QoS indicators: delivery time, delivery ratio, and bandwidth allocation fairness. The results show significant improvements in all QoS metrics, not usually achievable in the field of DTNs.
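As an illustration of what a sparsification step can look like (the paper's actual technique, which preserves specific properties of the aeronautical contact graph, is not described in the abstract), here is a minimal top-k contact filter over hypothetical weighted edges:

```python
from collections import defaultdict

def sparsify(edges, k=3):
    """Keep, for each node, only its k highest-weight contacts.
    Illustrative only: an edge survives if either endpoint ranks it top-k."""
    by_node = defaultdict(list)
    for u, v, w in edges:
        by_node[u].append((w, u, v))
        by_node[v].append((w, u, v))
    kept = set()
    for node, incident in by_node.items():
        # Highest-weight contacts first; keep at most k per node
        for w, u, v in sorted(incident, reverse=True)[:k]:
            kept.add((u, v, w))
    return sorted(kept)

# Hypothetical contact graph: (node, node, contact weight)
contacts = [("a", "b", 5), ("a", "c", 1), ("a", "d", 2),
            ("a", "e", 3), ("b", "c", 4)]
backbone = sparsify(contacts, k=1)  # the weakest contact of "a" is pruned
```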

Abstract:

The consumption or scavenging of fish in the water column at depths from 75 to 275 m in Algarve (southern Portugal) trawl fishing grounds was evaluated. Longlines were used to suspend baits throughout the water column, while electric fishing reels were used to simulate sinking discards. Eighteen species were caught, with higher catch rates near the surface than near the bottom. However, scavenging rates were generally highest near the bottom and lowest in the middle of the water column. At depths of less than 100 m, most or all of the fish were scavenged throughout the water column, while at depths greater than 200 m most of the fish were untouched after periods longer than would be required for them to sink to the bottom. Since other studies have shown that most small fish discards are scavenged at the surface by seabirds, and that most discarded species that sink are either too large or not attractive to pelagic predators, these results suggest that mid-water scavenging of trawl discards in deep water is relatively unimportant.