948 results for Angular acceleration
Abstract:
2000 Mathematics Subject Classification: 47H04, 65K10.
Abstract:
Дойчин Бояджиев, Галена Пеловска - The paper proposes an optimized algorithm that is faster than the previously described accelerated (modified STS) difference scheme for an age-structured population model with diffusion. While preserving the approximation properties of the modified STS algorithm, the computation time is reduced by almost a factor of two. This makes the optimized method preferable for problems with nonlinearity or higher dimensionality.
Abstract:
This paper details work carried out to verify the dimensional measurement performance of the Indoor GPS (iGPS) system, a network of Rotary-Laser Automatic Theodolites (R-LATs). Initially, tests were carried out to determine the angular uncertainties of an individual R-LAT transmitter-receiver pair. A method is presented for determining the uncertainty of dimensional measurement for a three-dimensional coordinate measurement machine. An experimental procedure was developed to compare three-dimensional coordinate measurements with calibrated reference points. The reference standard used to calibrate these reference points was a fringe-counting interferometer, with the multilateration technique employed to establish three-dimensional coordinates. This is an extension of the established technique of comparing measured lengths with calibrated lengths. The method was found to be practical and able to establish that the expanded uncertainty of the basic iGPS system was approximately 1 mm at a 95% confidence level. Further tests carried out on a highly optimized version of the iGPS system have shown that the coordinate uncertainty can be reduced to 0.25 mm at a 95% confidence level.
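As a rough illustration of the comparison technique described above (not the authors' actual analysis code), the sketch below uses hypothetical coordinate data: deviations between measured points and calibrated reference points are combined into a standard uncertainty and expanded with a coverage factor k = 2, which corresponds approximately to a 95% confidence level.

    # Minimal sketch with hypothetical data: compare measured 3D coordinates against
    # calibrated reference points and report an expanded uncertainty (k = 2, ~95%).
    import numpy as np

    measured = np.array([[0.0120, 1.4980, 0.7310],    # iGPS coordinates, metres
                         [2.5030, 1.5010, 0.7280],
                         [2.4990, 3.0010, 0.7350]])
    reference = np.array([[0.0115, 1.4985, 0.7305],   # interferometer-calibrated points
                          [2.5028, 1.5008, 0.7282],
                          [2.4991, 3.0007, 0.7348]])

    errors = np.linalg.norm(measured - reference, axis=1)  # 3D point-to-point deviations
    u = np.sqrt(np.mean(errors**2))                        # combined standard uncertainty (RMS, simplified)
    U = 2.0 * u                                            # expanded uncertainty, coverage factor k = 2
    print(f"expanded uncertainty U (k=2): {U*1e3:.2f} mm")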
Abstract:
The purpose of this research study was to determine whether the Advanced Placement program, as it is recognized by the universities in the Florida State University System (SUS), truly serves as an acceleration mechanism for those students who enter an SUS institution with passing AP scores. Despite mandates that attempt to ensure uniformity of policy, each public university in Florida determines which courses will be exempted and the number of credits it will grant for passing Advanced Placement courses. This is a descriptive study in which the AP policies of each of the SUS institutions were compared. Additionally, the college attendance and graduation data of a cohort of 593 Broward County high school graduates of the class of June 1992 were compared. Approximately 28% of the cohort members entered university with passing Advanced Placement scores. The rate of early and on-time graduation was significantly dependent on the Advanced Placement standing of the students in the cohort. Given the financial and human cost involved, it is recommended that all state universities bring their Advanced Placement policies into line with each other and implement a uniform Advanced Placement policy. It is also recommended that a follow-up study be conducted with a new cohort bound by the current 120-credit limitation for graduation.
Abstract:
For the first time, the Z0 boson angular distribution in the center-of-momentum frame is measured in proton-proton collisions at √s = 7 TeV at the CERN LHC. The data sample, recorded with the CMS detector, corresponds to an integrated luminosity of approximately 36 pb⁻¹. Events in which there is a Z0 and at least one jet, with a jet transverse momentum threshold of 20 GeV and absolute jet rapidity less than 2.4, are selected for the analysis. Only the Z0's muon decay channel is studied. Within experimental and theoretical uncertainties, the measured angular distribution is in agreement with next-to-leading-order perturbative QCD predictions.
Abstract:
The study of the angular distribution of photon plus jet events in pp collisions at √s = 7 TeV with the Compact Muon Solenoid (CMS) detector is presented. The photon is restricted to the central region of the detector (|η| < 1.4442), while the jet is allowed to be present in both the central and forward regions of CMS (|η| < 2.4). Dominant backgrounds due to jets fragmenting into neutral mesons are accounted for through the use of a template method that discriminates between signal and background. The angular variable, |η*|, is defined as the absolute value of the difference in η between the leading photon and the leading jet in an event, divided by two. The angular distribution over the range 0–1.4 was examined and compared with next-to-leading-order QCD predictions, and was found to be in good agreement.
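In the notation assumed here (the abstract states the definition only in words), this reads:

    \eta^{*} = \frac{\left|\eta_{\gamma} - \eta_{\mathrm{jet}}\right|}{2},

where η_γ and η_jet denote the pseudorapidities of the leading photon and the leading jet.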
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if hardware acceleration is used to accelerate the elements that incur performance overheads. The concepts presented in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the targeted application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is then converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, a central bus design and a co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, and hardware optimization techniques are used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
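As a hedged, generic illustration of the profiling step described above (not the authors' H.264 toolchain; the workload and function names below are hypothetical), a hotspot candidate for hardware offloading can be identified by ranking functions by cumulative runtime, e.g. with Python's built-in cProfile:

    # Hedged illustration of hotspot profiling (not the authors' H.264 CODEC code):
    # rank functions by cumulative time to find candidates for FPGA acceleration.
    import cProfile
    import pstats
    import numpy as np

    def transform_block(block):
        # stand-in for a compute-heavy kernel such as a block transform
        return np.fft.fft2(block).real

    def encode_frame(frame, block=8):
        h, w = frame.shape
        out = np.empty_like(frame)
        for y in range(0, h, block):
            for x in range(0, w, block):
                out[y:y+block, x:x+block] = transform_block(frame[y:y+block, x:x+block])
        return out

    profiler = cProfile.Profile()
    profiler.enable()
    encode_frame(np.random.rand(512, 512))
    profiler.disable()

    stats = pstats.Stats(profiler).sort_stats("cumulative")
    stats.print_stats(5)   # the top entries are the hotspot candidates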
Abstract:
There is observational evidence that global sea level is rising and there is concern that the rate of rise will increase, significantly threatening coastal communities. However, considerable debate remains as to whether the rate of sea level rise is currently increasing and, if so, by how much. Here we provide new insights into sea level accelerations by applying the main methods that have been used previously to search for accelerations in historical data, to identify the timings (with uncertainties) at which accelerations might first be recognized in a statistically significant manner (if not apparent already) in sea level records that we have artificially extended to 2100. We find that the most important approach to earliest possible detection of a significant sea level acceleration lies in improved understanding (and subsequent removal) of interannual to multidecadal variability in sea level records.
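One of the main methods referred to above is a quadratic fit to a sea level record, in which the estimated acceleration is twice the quadratic coefficient. The sketch below is a minimal, hedged illustration on synthetic data (not the records analysed in the paper), and omits the significance testing and variability removal the study focuses on:

    # Hedged sketch of a common acceleration test: quadratic fit to a tide-gauge series.
    import numpy as np

    years = np.arange(1900, 2101)
    rng = np.random.default_rng(0)
    # synthetic record: 2 mm/yr trend + 0.01 mm/yr^2 acceleration + interannual noise
    sea_level = 2.0*(years - 1900) + 0.5*0.01*(years - 1900)**2 + rng.normal(0, 15, years.size)

    t = years - years.mean()
    c2, c1, c0 = np.polyfit(t, sea_level, 2)   # quadratic, linear, constant coefficients
    acceleration = 2.0 * c2                     # mm/yr^2; significance requires an error estimate
    print(f"estimated acceleration: {acceleration:.4f} mm/yr^2")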
Abstract:
The aim of this thesis is to study the angular momentum of a sample of S0 galaxies. In the quest to understand whether the formation of S0 galaxies is more closely linked to that of ellipticals or that of spirals, our goal is to compare their specific angular momentum as a function of stellar mass with that of spirals. Through a kinematic comparison between these different classes of galaxies, we aim to understand whether a scenario of passive evolution, in which the galaxy's gas is consumed and star formation is quenched, can be considered a plausible mechanism to explain the transformation from spirals to S0s. In order to derive the structural and photometric parameters of the galaxy sub-components, we performed a bulge-disc decomposition of optical images using GALFIT. The stellar kinematics of the galaxies were measured using integral field spectroscopic data from the CALIFA survey. The development of new original software, based on a Markov chain Monte Carlo algorithm, allowed us to obtain the line-of-sight velocity and velocity dispersion of the disc and bulge components. We find that S0 discs have a distribution of stellar specific angular momentum that is in full agreement with that of spiral discs, so the mechanism of simple fading can be considered one of the most important drivers of the transformation from spirals to S0s.
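For reference, the specific stellar angular momentum compared here is assumed to follow the conventional definition (the abstract does not state the formula explicitly):

    j_\ast \;=\; \frac{J_\ast}{M_\ast}
           \;=\; \frac{\int \Sigma(R)\, V_{\mathrm{rot}}(R)\, R \; 2\pi R\, \mathrm{d}R}
                      {\int \Sigma(R)\; 2\pi R\, \mathrm{d}R},

where Σ(R) is the stellar surface density and V_rot(R) the rotation velocity of the component (disc or bulge) under consideration.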
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues and organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research effort has been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging capability. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods for radiotherapy assessment. The study is therefore divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key technical factors, and improvements to DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and the combination of multiple PK models for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was investigated for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In a retrospective, IRB-approved study of brain radiosurgery patient DCE-MRI scans, the clinically obtained image data were selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the full k-space of the reference images with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a series of Cartesian random sampling grids with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated, one from the undersampled data and one from the fully sampled data. Multiple quantitative measurements and statistical analyses were performed to evaluate the accuracy of the PK maps generated from the undersampled data against those generated from the fully sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from DCE images reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from the undersampled and fully sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
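As a hedged sketch of the undersampled iterative reconstruction idea (using a plain total variation penalty and a simple Cartesian random mask for brevity, rather than the TGV term and the exact sampling grids described above; synthetic image, not patient data):

    # Hedged sketch: iterative reconstruction from ~4x undersampled k-space with a
    # smoothed total-variation penalty (a simplification of the TGV term used in the study).
    import numpy as np

    rng = np.random.default_rng(1)
    img = np.zeros((128, 128)); img[32:96, 32:96] = 1.0; img[48:80, 48:80] = 2.0

    mask = rng.random(img.shape) < 0.25          # ~4x Cartesian random undersampling
    kspace = np.fft.fft2(img) * mask             # simulated accelerated acquisition

    def grad(u):                                 # forward differences
        return np.stack([np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u])

    def div(p):                                  # negative adjoint of grad
        return (p[0] - np.roll(p[0], 1, 0)) + (p[1] - np.roll(p[1], 1, 1))

    x, lam, step = np.zeros_like(img), 0.05, 0.5
    for _ in range(200):
        # data-consistency gradient: F^H M (M F x - y)
        data_grad = np.real(np.fft.ifft2(mask * (np.fft.fft2(x) * mask - kspace)))
        g = grad(x)
        tv_grad = -div(g / np.sqrt((g**2).sum(0) + 1e-8))   # smoothed-TV gradient
        x = x - step * (data_grad + lam * tv_grad)

    print("relative reconstruction error:", np.linalg.norm(x - img) / np.linalg.norm(img))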
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based reformulation of the commonly used Tofts PK model, which is usually presented as an integral expression. The method also includes a Kolmogorov-Zurbenko (KZ) filter to suppress noise in the data, and solves for the PK parameters as a linear problem in matrix form. In a computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and noise levels. Results showed that at both high temporal resolution (<1 s) and clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current methods at clinically relevant noise levels; at high temporal resolution, the calculation efficiency of the new method exceeded that of current methods by roughly two orders of magnitude. In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
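A minimal sketch of the underlying idea, assuming the standard Tofts model written in derivative form, dCt/dt = Ktrans·Cp(t) − kep·Ct(t), and solved for (Ktrans, kep) by linear least squares; the KZ filtering step and the authors' exact matrix formulation are omitted, and the data are synthetic:

    # Hedged sketch: derivative form of the standard Tofts model fitted as a linear problem.
    import numpy as np

    dt = 1.0                                     # s, high temporal resolution
    t = np.arange(0, 300, dt)
    Cp = 5.0 * (t/60.0) * np.exp(-t/60.0)        # hypothetical arterial input function
    Ktrans_true, kep_true = 0.25/60, 0.60/60     # 1/s

    # forward-simulate Ct with the Tofts convolution (Riemann sum)
    Ct = Ktrans_true * dt * np.array(
        [np.sum(Cp[:i+1] * np.exp(-kep_true * (t[i] - t[:i+1]))) for i in range(t.size)])

    dCt = np.gradient(Ct, dt)                    # numerical derivative of tissue concentration
    A = np.column_stack([Cp, -Ct])               # design matrix of the linear problem
    Ktrans_fit, kep_fit = np.linalg.lstsq(A, dCt, rcond=None)[0]
    print(f"Ktrans: {Ktrans_fit*60:.3f}/min (true 0.250), kep: {kep_fit*60:.3f}/min (true 0.600)")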
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part pursues methodological developments along two approaches. The first is to develop model-free analysis methods for evaluating DCE-MRI functional heterogeneity. This approach is motivated by the rationale that radiotherapy-induced functional change can be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment and control groups received multiple treatment fractions, with one pre-treatment and multiple post-treatment high spatiotemporal resolution DCE-MRI scans. In the post-treatment scan two weeks after the start of treatment, the investigated Rényi dimensions of the classic PK rate constant map demonstrated significant differences between the treatment and control groups; when the Rényi dimensions were used for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM showed overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second method developed is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images. In the small-animal experiment mentioned above, selected parameters from the dynamic FSD analysis showed significant differences between the treatment and control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When dynamic FSD parameters were used, treatment/control group classification after the first treatment fraction was improved compared with using conventional PK statistics. These results suggest that this novel method is promising for capturing early therapeutic response.
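For reference, the Rényi dimensions mentioned above are assumed here to follow the standard generalized box-counting definition,

    D_q \;=\; \lim_{\varepsilon \to 0} \frac{1}{q-1}\,
              \frac{\log \sum_i p_i(\varepsilon)^{\,q}}{\log \varepsilon},

where p_i(ε) is the fraction of the (normalized) parameter-map mass falling in box i of size ε; q = 0, 1 and 2 correspond to the box-counting, information and correlation dimensions, respectively.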
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or an alternative version is widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is therefore intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using comparisons of regional mean PK parameter values. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to the one from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
That we live in a time of unprecedented and ever-increasing change is both a shibboleth of our age and the more-or-less explicit justification for all manner of "strategic" actions. The seldom, if ever, questioned assumption is that our now is more ephemeral, more evanescent, than any that preceded it. In this essay, we subject this assumption to some critical scrutiny, drawing on a range of empirical detail. In the face of this assay we find the assumption to be considerably wanting. We suggest that what we are actually witnessing is mere acceleration, which we distinguish as intensification along a preexisting trajectory, parading as more substantive and radical movement away from that trajectory. Deploying Deleuze's (2004) terms, we are, we suggest, in thrall to representation of the same at the expense of repetition of difference. Our consumption by acceleration, we argue, both occludes the lack of substantive change actually occurring and delimits possibilities of thinking of and enacting the truly radical. We also consider how this setup is maintained, thus attempting to shed some light on why we are seemingly running to stand still. As the Red Queen said, "it's necessary to run faster even to stay in the one place."
Abstract:
With the emerging prevalence of smartphones and 4G LTE networks, the demand for faster, better, cheaper mobile services anytime and anywhere is ever growing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions that accelerate network optimization algorithms. The solution aims to satisfy the high performance required by DNO, initially on a sub-hourly basis. The paper then sets out a design and a full cycle of a DNO system. A set of potential solutions for large-network and real-time DNO is also proposed. Overall, this work represents a breakthrough towards the realization of DNO.
Abstract:
Multiple ion acceleration mechanisms can occur when an ultrathin foil is irradiated with an intense laser pulse, with the dominant mechanism changing over the course of the interaction. Measurement of the spatial-intensity distribution of the beam of energetic protons is used to investigate the transition from radiation pressure acceleration to transparency-driven processes. It is shown numerically that radiation pressure drives an increased expansion of the target ions within the spatial extent of the laser focal spot, which induces a radial deflection of relatively low energy sheath-accelerated protons to form an annular distribution. Through variation of the target foil thickness, the opening angle of the ring is shown to be correlated with the point in time at which transparency occurs during the interaction, and is maximized when transparency occurs at the peak of the laser intensity profile. Corresponding experimental measurements of the variation of ring size with target thickness exhibit the same trends and provide insight into the intra-pulse laser-plasma evolution.
Abstract:
Control of the collective response of plasma particles to intense laser light is intrinsic to relativistic optics, the development of compact laser-driven particle and radiation sources, as well as investigations of some laboratory astrophysics phenomena. We recently demonstrated that a relativistic plasma aperture produced in an ultra-thin foil at the focus of intense laser radiation can induce diffraction, enabling polarization-based control of the collective motion of plasma electrons. Here we show that under these conditions the electron dynamics are mapped into the beam of protons accelerated via strong charge-separation-induced electrostatic fields. It is demonstrated experimentally and numerically via 3D particle-in-cell simulations that the degree of ellipticity of the laser polarization strongly influences the spatial-intensity distribution of the beam of multi-MeV protons. The influence on both sheath-accelerated and radiation pressure-accelerated protons is investigated. This approach opens up a potential new route to control laser-driven ion sources.
Abstract:
A large eddy simulation is performed to study the deflagration-to-detonation transition phenomenon in an obstructed channel containing a premixed stoichiometric hydrogen–air mixture. Two-dimensional filtered reactive Navier–Stokes equations are solved using the artificially thickened flame (ATF) approach to model sub-grid scale combustion. To include the effect of induction time, a 27-step detailed mechanism is used along with an in situ adaptive tabulation (ISAT) method to reduce the computational cost of the detailed chemistry. The results show that in the slow flame propagation regime, flame–vortex interaction and the resulting flame folding and wrinkling are the main mechanisms for the increase of the flame surface and consequently the acceleration of the flame. At high speed, the major mechanisms responsible for flame propagation are repeated reflected shock–flame interactions and the resulting baroclinic vorticity. These interactions intensify the rate of heat release and maintain the turbulence and flame speed at a high level. During the flame acceleration, the turbulent flame is seen to enter the 'thickened reaction zones' regime. Therefore, it is necessary to use a chemistry-based combustion model with detailed chemical kinetics to properly capture the salient features of fast deflagration propagation.
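For context, the ATF approach mentioned above is conventionally built on the laminar scaling relations below; this is the standard formulation, stated here as background rather than taken from the paper. Multiplying the diffusivity D by a thickening factor F and dividing the reaction rate ω̇ by F thickens the flame by F while preserving the laminar flame speed, and an efficiency function E compensates for the sub-grid wrinkling lost by thickening:

    s_L \propto \sqrt{D\,\dot{\omega}}, \qquad \delta_L \propto \frac{D}{s_L};
    \qquad D \to E\,F\,D, \quad \dot{\omega} \to \frac{E}{F}\,\dot{\omega}
    \;\;\Rightarrow\;\; s_L \to E\, s_L, \quad \delta_L \to F\, \delta_L .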