860 results for Large-scale Structure
Abstract:
The seminal multiple-view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis (MVS) methodology. The somewhat small size and variability of these data sets, however, limit their scope and the conclusions that can be derived from them. To facilitate further development within MVS, we here present a new and varied data set consisting of 80 scenes, seen from 49 or 64 accurate camera positions. This is accompanied by accurate structured light scans for reference and evaluation. In addition, all images are taken under seven different lighting conditions. As a benchmark, and to validate the use of our data set for obtaining reasonable and statistically significant findings about MVS, we have applied the three state-of-the-art MVS algorithms by Campbell et al., Furukawa et al., and Tola et al. to the data set. To do this, we have extended the evaluation protocol from the Middlebury evaluation, necessitated by the more complex geometry of some of our scenes. The data set and accompanying evaluation framework are made freely available online. Based on this evaluation, we are able to observe several characteristics of state-of-the-art MVS, e.g. that there is a tradeoff between the quality of the reconstructed 3D points (accuracy) and how much of an object’s surface is captured (completeness). Also, several issues that we hypothesized would challenge MVS, such as specularities and changing lighting conditions, did not pose serious problems. Our study finds that the two most pressing issues for MVS are lack of texture and meshing (forming 3D points into closed triangulated surfaces).
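For readers implementing an evaluation of this kind, the following is a minimal sketch (not the benchmark's official protocol) of the accuracy/completeness idea: nearest-neighbour distances from the reconstruction to the reference scan and vice versa. The point clouds here are synthetic, and the masking and outlier thresholds used by the actual framework are not reproduced.

```python
# Minimal sketch of the accuracy/completeness idea for MVS evaluation.
# Assumes both the reconstruction and the structured-light reference are
# given as (N, 3) NumPy arrays of 3D points; the benchmark's real protocol
# additionally uses observability masks and outlier thresholds.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_and_completeness(reconstruction, reference):
    """Median reconstruction-to-reference distance (accuracy) and
    median reference-to-reconstruction distance (completeness proxy)."""
    ref_tree = cKDTree(reference)
    rec_tree = cKDTree(reconstruction)
    acc_dists, _ = ref_tree.query(reconstruction)   # how close each reconstructed point is to the scan
    comp_dists, _ = rec_tree.query(reference)       # how well the scan is covered by the reconstruction
    return np.median(acc_dists), np.median(comp_dists)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.uniform(size=(5000, 3))
    reconstruction = reference[:3000] + rng.normal(scale=0.002, size=(3000, 3))
    acc, comp = accuracy_and_completeness(reconstruction, reference)
    print(f"accuracy ~ {acc:.4f}, completeness distance ~ {comp:.4f}")
```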
Abstract:
For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, “wearable,” sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that “learn” from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses the implications of this technology and provides a practical road map for realizing its full potential in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.
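As a purely illustrative companion to the tutorial, here is a minimal sketch of the kind of cross-validated supervised learning the article reviews, applied to a hypothetical feature matrix extracted from wearable-sensor recordings; the data, feature names and model choice are assumptions, not the article's.

```python
# Minimal sketch of cross-validated supervised learning on features that
# might be extracted from wearable-sensor recordings (synthetic data here);
# the review covers many such algorithms and their evaluation pitfalls.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 200
X = rng.normal(size=(n_subjects, 10))       # e.g. tremor amplitude, gait variability, ... (made up)
y = rng.integers(0, 2, size=n_subjects)     # e.g. PD vs. control labels (synthetic)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # held-out evaluation guards against overfitting
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```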
Abstract:
The purpose of this investigation was to develop new techniques to generate segmental assessments of body composition based on Segmental Bioelectrical Impedance Analysis (SBIA). An equally important consideration was the design, simulation, development, and software and hardware integration of the SBIA system. This integration was carried out with a Very Large Scale Integration (VLSI) Field Programmable Gate Array (FPGA) microcontroller that analyzed the measurements obtained from segments of the body and provided full-body and segmental Fat Free Mass (FFM) and Fat Mass (FM) percentages. The issues related to estimating body composition in persons with spinal cord injury (SCI) were also addressed and investigated. This investigation demonstrated that the SBIA methodology provided accurate segmental body composition measurements. Disabled individuals are expected to benefit from these SBIA evaluations, as they are non-invasive and suitable for paralyzed individuals. The SBIA VLSI system may replace bulky, inflexible electronic modules attached to human bodies.
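The abstract does not give the regression equations, so the sketch below only illustrates the general segmental-BIA idea: each segment's impedance index (length squared over resistance) is mapped to fat-free mass and the segments are summed. All coefficients and measurements are hypothetical placeholders, not the dissertation's calibrated values.

```python
# Minimal sketch of the segmental BIA idea: each body segment's impedance
# index (segment length squared over resistance) is mapped to fat-free mass
# by a linear regression, and the segments are summed. The coefficients and
# the example measurements are hypothetical placeholders, NOT the calibrated
# equations developed in the dissertation.
SEGMENTS = ["right_arm", "left_arm", "trunk", "right_leg", "left_leg"]
COEFFS = {"slope": 0.30, "intercept": 2.5}   # hypothetical regression parameters

def segment_ffm(length_cm, resistance_ohm):
    """Fat-free mass estimate (kg) for one segment from its impedance index."""
    impedance_index = length_cm ** 2 / resistance_ohm
    return COEFFS["slope"] * impedance_index + COEFFS["intercept"]

def body_composition(measurements, body_mass_kg):
    """Sum segmental FFM estimates and report whole-body FFM% and FM%."""
    ffm_kg = sum(segment_ffm(length, res) for length, res in measurements.values())
    ffm_pct = 100.0 * ffm_kg / body_mass_kg
    return {"FFM_percent": round(ffm_pct, 1), "FM_percent": round(100.0 - ffm_pct, 1)}

example = {seg: (75.0, 280.0) for seg in SEGMENTS}   # (length cm, resistance ohm), made up
print(body_composition(example, body_mass_kg=80.0))
```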
Abstract:
Internet Protocol Television (IPTV) is a system in which a digital television service is delivered using Internet Protocol over a network infrastructure. There is considerable confusion and concern about IPTV, since two different technologies have to be merged to provide end customers with something better than conventional television. In this research, the functional architecture of the IPTV system was investigated. A Very Large Scale Integration (VLSI) based system for a streaming server controller was designed, and different ways of hosting a web server that can be used to send control signals to the streaming server controller were studied. The web server accepts inputs from the keyboard and the FPGA board switches and, depending on the preset configuration, opens a selected web page and sends the control signals to the streaming server controller. It was observed that the applications run faster on the PowerPC, since it is embedded in the FPGA. The commercial market and global deployment of IPTV were also discussed.
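As a rough illustration of the control path described (a web front end driving a streaming server controller), the sketch below maps a channel selection to a control message sent over a TCP socket; the host, port, message format and endpoint names are hypothetical, and the original system implements this on FPGA hardware with an embedded PowerPC.

```python
# Minimal sketch of the control path: an HTTP handler maps a user selection
# (keyboard / FPGA switch input in the original system) to a control message
# sent to the streaming server controller. Host, port and message format are
# hypothetical placeholders for illustration only.
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

CONTROLLER_ADDR = ("192.168.1.50", 9000)     # hypothetical controller endpoint
CHANNEL_COMMANDS = {"/ch1": b"STREAM 1\n", "/ch2": b"STREAM 2\n"}

def send_control(command: bytes) -> None:
    """Send one control message to the streaming server controller."""
    with socket.create_connection(CONTROLLER_ADDR, timeout=2.0) as sock:
        sock.sendall(command)

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        command = CHANNEL_COMMANDS.get(self.path)
        if command is None:
            self.send_error(404, "unknown channel")
            return
        send_control(command)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"channel switched\n")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()
```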
Abstract:
As massive data sets become increasingly available, people face the problem of how to effectively process and understand them. Traditional sequential computing models are giving way to parallel and distributed computing models, such as MapReduce, due both to the large size of the data sets and to their high dimensionality. This dissertation, in the same direction as other research based on MapReduce, develops effective techniques and applications using MapReduce that can help people solve large-scale problems. Three different problems are tackled in the dissertation. The first deals with processing terabytes of raster data in a spatial data management system. Aerial imagery files are broken into tiles to enable data-parallel computation. The second and third problems deal with dimension reduction techniques that can be used to handle data sets of high dimensionality. Three variants of the nonnegative matrix factorization technique are scaled up to factorize matrices with dimensions on the order of millions in MapReduce, based on different matrix multiplication implementations. Two algorithms, which compute CANDECOMP/PARAFAC and Tucker tensor decompositions respectively, are parallelized in MapReduce by carefully partitioning the data and arranging the computation to maximize data locality and parallelism.
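To make the MapReduce matrix-multiplication building block concrete, here is a single-process sketch of one standard pattern: map each nonzero entry to a join key, group by that key, and reduce partial products per output cell. The dissertation's NMF and tensor implementations run this kind of pattern on an actual cluster with their own partitioning choices.

```python
# Minimal single-process sketch of a classic MapReduce matrix-multiplication
# pattern: map each nonzero of A and B to a join key, group by key, emit
# partial products, then sum per output cell.
from collections import defaultdict

def matmul_mapreduce(A_entries, B_entries):
    """A_entries: {(i, k): value}, B_entries: {(k, j): value} -> {(i, j): value}."""
    # Map + shuffle: group A by column k and B by row k (the join key).
    by_k = defaultdict(lambda: ([], []))
    for (i, k), a in A_entries.items():
        by_k[k][0].append((i, a))
    for (k, j), b in B_entries.items():
        by_k[k][1].append((j, b))
    # Reduce: emit partial products keyed by output cell (i, j) and sum them.
    partials = defaultdict(float)
    for a_list, b_list in by_k.values():
        for i, a in a_list:
            for j, b in b_list:
                partials[(i, j)] += a * b
    return dict(partials)

A = {(0, 0): 1.0, (0, 1): 2.0, (1, 1): 3.0}
B = {(0, 0): 4.0, (1, 0): 5.0, (1, 1): 6.0}
print(matmul_mapreduce(A, B))   # {(0, 0): 14.0, (0, 1): 12.0, (1, 0): 15.0, (1, 1): 18.0}
```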
Abstract:
We developed a conceptual ecological model (CEM) for invasive species to help understand the role invasive exotics have in ecosystem ecology and their impacts on restoration activities. Our model, which can be applied to any invasive species, grew from the eco-regional conceptual models developed for Everglades restoration. These models identify ecological drivers, stressors, effects and attributes; we integrated the unique aspects of exotic species invasions and effects into this conceptual hierarchy. We used the model to help identify important aspects of invasion in the development of an invasive exotic plant ecological indicator, which is described in a companion paper in this special issue. A key aspect of the CEM is that it is a general ecological model that can be tailored to specific cases and species, as the details of any invasion are unique to that invasive species. Our model encompasses the temporal and spatial changes that characterize invasion, identifying the general conditions that allow a species to become invasive in a de novo environment; it then enumerates the possible effects exotic species may have, collectively and individually, at varying scales and for different ecosystem properties once a species becomes invasive. The model provides suites of characteristics and processes, as well as hypothesized causal relationships, to consider when thinking about the effects or potential effects of an invasive exotic and how restoration efforts will affect these characteristics and processes. To illustrate how to use the model as a blueprint for applying a similar approach to other invasive species and ecosystems, we give two examples of using this conceptual model to evaluate the status of two south Florida invasive exotic plant species (melaleuca and Old World climbing fern) and to consider potential impacts of these invasive species on restoration.
Abstract:
With the exponentially increasing demands on and uses of GIS data visualization systems, such as urban planning, environment and climate change monitoring, weather simulation, and hydrographic gauging, research, applications and technology for geospatial vector and raster data visualization have become prevalent. However, current web GIS techniques are only suitable for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these challenging and previously unimplemented areas: how to provide a large-scale dynamic vector and raster data visualization service, with dynamically overlaid layers, accessible from various client devices through a standard web browser, and how to make such a service as fast as its static counterpart. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed and implemented. The components include: quadtree-based indexing and parallel map tiling; the Legend String; vector data visualization with dynamic layer overlaying; vector data time series visualization; an algorithm for vector data rendering; an algorithm for raster data re-projection; an algorithm for eliminating superfluous levels of detail; an algorithm for vector data gridding and re-grouping; and vector and raster data caching on the cluster server side.
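As an illustration of quadtree-based map tiling, the sketch below uses the common Web Mercator "slippy map" scheme to convert a longitude/latitude to a tile address and a quadtree key; the dissertation's own indexing and parallel tiling pipeline may differ in its details.

```python
# Minimal sketch of quadtree-style map tiling: map a lon/lat to a Web
# Mercator tile (x, y) at a zoom level and encode it as a quadkey, so each
# extra zoom level refines a tile into four children.
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Return the (x, y) tile containing the point at the given zoom level."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

def tile_to_quadkey(x, y, zoom):
    """Interleave the bits of x and y into a quadtree key, one digit per level."""
    digits = []
    for level in range(zoom, 0, -1):
        mask = 1 << (level - 1)
        digit = (1 if x & mask else 0) + (2 if y & mask else 0)
        digits.append(str(digit))
    return "".join(digits)

x, y = lonlat_to_tile(-80.19, 25.76, zoom=10)   # a point near Miami, for illustration
print(x, y, tile_to_quadkey(x, y, 10))
```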
Abstract:
Low-rise buildings are often subjected to high wind loads during hurricanes that lead to severe damage and cause water intrusion. It is therefore important to estimate accurate wind pressures for design purposes to reduce losses. Wind loads on low-rise buildings can differ significantly depending upon the laboratory in which they were measured. The differences are due in large part to inadequate simulations of the low-frequency content of atmospheric velocity fluctuations in the laboratory and to the small scale of the models used for the measurements. A new partial turbulence simulation methodology was developed for simulating the effect of low-frequency flow fluctuations on low-rise buildings more effectively, from the point of view of testing accuracy and repeatability, than is currently the case. The methodology was validated by comparing aerodynamic pressure data for building models obtained in the open-jet 12-Fan Wall of Wind (WOW) facility against their counterparts in a boundary-layer wind tunnel. Field measurements of pressures on the Texas Tech University building and the Silsoe building were also used for validation purposes. The tests in partial simulation are freed of integral length scale constraints, meaning that model length scales in such testing are limited only by blockage considerations. Thus the partial simulation methodology can be used to produce aerodynamic data for low-rise buildings by using large-scale models in wind tunnels and WOW-like facilities. This is a major advantage, because large-scale models allow for accurate modeling of architectural details, testing at higher Reynolds numbers, greater spatial resolution of the pressure taps in high-pressure zones, and assessment of the performance of aerodynamic devices to reduce wind effects. The technique eliminates a major cause of discrepancies among measurements conducted in different laboratories and can help to standardize flow simulations for testing residential homes, as well as significantly improving testing accuracy and repeatability. Partial turbulence simulation was used in the WOW to determine the performance of discontinuous perforated parapets in mitigating roof pressures. Comparisons of pressures with and without parapets showed significant reductions in pressure coefficients in the zones with high suctions. This demonstrates the potential of such aerodynamic add-on devices to reduce uplift forces.
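For orientation, the sketch below shows how a pressure-tap time series is typically reduced to mean and peak pressure coefficients, Cp = (p - p_ref)/(0.5 rho U^2); the actual WOW and wind-tunnel processing (tap layouts, area averaging, partial-turbulence corrections) is considerably more involved, and the numbers here are synthetic.

```python
# Minimal sketch of reducing a pressure-tap time series to mean and peak
# pressure coefficients, Cp = (p - p_ref) / (0.5 * rho * U^2).
import numpy as np

def pressure_coefficients(p, p_ref, rho=1.225, U=30.0):
    """Return mean, min (peak suction) and max Cp for one tap's time series."""
    q = 0.5 * rho * U ** 2            # reference dynamic pressure (Pa)
    cp = (np.asarray(p) - p_ref) / q
    return cp.mean(), cp.min(), cp.max()

rng = np.random.default_rng(0)
p_series = -600.0 + 150.0 * rng.normal(size=10_000)   # synthetic suction record (Pa)
mean_cp, min_cp, max_cp = pressure_coefficients(p_series, p_ref=0.0)
print(f"mean Cp {mean_cp:.2f}, peak suction Cp {min_cp:.2f}, max Cp {max_cp:.2f}")
```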
Abstract:
Long-span bridges are flexible and therefore sensitive to wind-induced effects. One way to improve the stability of long-span bridges against flutter is to use cross-sections that involve twin side-by-side decks. However, this can amplify responses due to vortex-induced oscillations. Wind tunnel testing is a well-established practice for evaluating the stability of bridges against wind loads. In order to study the response of the prototype in the laboratory, dynamic similarity requirements should be satisfied. One of the parameters that is normally violated in wind tunnel testing is the Reynolds number. In this dissertation, the effects of Reynolds number on the aerodynamics of a double-deck bridge were evaluated by measuring fluctuating forces on a motionless sectional model of a bridge at different wind speeds representing different Reynolds regimes. The efficacy of vortex mitigation devices was also evaluated in different Reynolds number regimes. Another parameter that is frequently ignored in wind tunnel studies is the correct simulation of turbulence characteristics. Due to the difficulties of simulating flow with a large turbulence length scale on a sectional model, wind tunnel tests are often performed in smooth flow as a conservative approach. The validity of simplifying assumptions in the calculation of buffeting loads, as the direct impact of turbulence, needs to be verified for twin-deck bridges. The effects of turbulence characteristics were investigated by testing sectional models of a twin-deck bridge under two different turbulent flow conditions. Not only do the flow properties play an important role in the aerodynamic response of the bridge, but the geometry of the cross-section is also expected to have significant effects. In this dissertation, the effects of deck details, such as the width of the gap between the twin decks and the traffic barriers, on the aerodynamic characteristics of a twin-deck bridge were investigated, particularly the vortex shedding forces, with the aim of clarifying how these shape details can alter the wind-induced responses. Finally, a summary of the issues involved in designing a dynamic test rig for high Reynolds number tests is given, using the studied cross-section as an example.
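A small sketch of the Reynolds-number mismatch mentioned above, Re = U D / nu, compared between an illustrative full-scale deck and a geometrically scaled sectional model; the dimensions, speeds and scale ratio are assumptions, not the studied bridge's.

```python
# Small sketch of the Reynolds-number mismatch between prototype and model:
# Re = U * D / nu, with illustrative deck depth, wind speeds and 1:50 scale.
NU_AIR = 1.5e-5   # kinematic viscosity of air, m^2/s

def reynolds(wind_speed, depth, nu=NU_AIR):
    return wind_speed * depth / nu

full_scale = reynolds(wind_speed=30.0, depth=4.0)          # assumed prototype deck depth ~4 m
model_scale = reynolds(wind_speed=15.0, depth=4.0 / 50.0)  # assumed 1:50 sectional model
print(f"prototype Re ~ {full_scale:.1e}, model Re ~ {model_scale:.1e}, "
      f"ratio ~ {full_scale / model_scale:.0f}")
```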
Abstract:
Social media classification problems have drawn more and more attention in the past few years. With the rapid development of the Internet and the popularity of computers, there is an astronomical amount of information in social networks (social media platforms). The datasets are generally large scale and are often corrupted by noise. The presence of noise in the training set has a strong impact on the performance of supervised learning (classification) techniques. A budget-driven One-class SVM approach suitable for large-scale social media data classification is presented in this thesis. Our approach is based on an existing online One-class SVM learning algorithm, referred to as the STOCS (Self-Tuning One-Class SVM) algorithm. To justify our choice, we first analyze the noise-resilience of STOCS using synthetic data. The experiments suggest that STOCS is more robust against label noise than several other existing approaches. Next, to handle the big data classification problem for social media data, we introduce several budget-driven features, which allow the algorithm to be trained within limited time and under limited memory requirements. In addition, the resulting algorithm can easily adapt to changes in dynamic data with minimal computational cost. Compared with two state-of-the-art approaches, Lib-Linear and kNN, our approach is shown to be competitive while requiring less memory and time.
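For context, here is a minimal sketch of one-class SVM learning on noisy synthetic data using scikit-learn's batch OneClassSVM; the thesis's STOCS algorithm is an online, self-tuning variant with explicit time and memory budgets, which this sketch does not reproduce.

```python
# Minimal sketch of one-class SVM learning on noisy data with scikit-learn's
# batch OneClassSVM. The data are synthetic; STOCS (online, self-tuning,
# budget-constrained) is not implemented here.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
inliers = rng.normal(loc=0.0, scale=1.0, size=(950, 2))      # "normal" class
label_noise = rng.uniform(low=-6.0, high=6.0, size=(50, 2))  # mislabeled / noisy points
X_train = np.vstack([inliers, label_noise])

# nu upper-bounds the fraction of training points treated as outliers,
# which gives some robustness to label noise in the training set.
model = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.1).fit(X_train)

X_test = np.vstack([rng.normal(size=(100, 2)), rng.uniform(-6, 6, size=(10, 2))])
pred = model.predict(X_test)          # +1 for inliers, -1 for outliers
print(f"flagged {np.sum(pred == -1)} of {len(X_test)} test points as outliers")
```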
Abstract:
ACKNOWLEDGEMENTS This research is based upon work supported in part by the U.S. ARL and U.K. Ministry of Defense under Agreement Number W911NF-06-3-0001, and by the NSF under award CNS-1213140. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views or represent the official policies of the NSF, the U.S. ARL, the U.S. Government, the U.K. Ministry of Defense or the U.K. Government. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
Abstract:
The research presented in this paper is part of the SINBAD project, funded by STW (Grant Number 12058) and EPSRC (EP/J00507X/1, EP/J005541/1).
Abstract:
Thermodynamic stability measurements on proteins and protein-ligand complexes can offer insights not only into the fundamental properties of protein folding reactions and protein functions, but also into the development of protein-directed therapeutic agents to combat disease. Conventional calorimetric or spectroscopic approaches for measuring protein stability typically require large amounts of purified protein. This requirement has precluded their use in proteomic applications. Stability of Proteins from Rates of Oxidation (SPROX) is a recently developed mass spectrometry-based approach for proteome-wide thermodynamic stability analysis. Since the proteomic coverage of SPROX is fundamentally limited by the detection of methionine-containing peptides, the use of tryptophan-containing peptides was investigated in this dissertation. A new SPROX-like protocol was developed that measured protein folding free energies using the denaturant dependence of the rate at which globally protected tryptophan and methionine residues are modified with dimethyl (2-hydroxyl-5-nitrobenzyl) sulfonium bromide and hydrogen peroxide, respectively. This so-called Hybrid protocol was applied to proteins in yeast and MCF-7 cell lysates and achieved a ~50% increase in proteomic coverage compared to probing only methionine-containing peptides. Subsequently, the Hybrid protocol was successfully utilized to identify and quantify both known and novel protein-ligand interactions in cell lysates. The ligands under study included the well-known Hsp90 inhibitor geldanamycin and the less well-understood omeprazole sulfide that inhibits liver-stage malaria. In addition to protein-small molecule interactions, protein-protein interactions involving Puf6 were investigated using the SPROX technique in comparative thermodynamic analyses performed on wild-type and Puf6-deletion yeast strains. A total of 39 proteins were detected as Puf6 targets and 36 of these targets were previously unknown to interact with Puf6. Finally, to facilitate the SPROX/Hybrid data analysis process and minimize human errors, a Bayesian algorithm was developed for transition midpoint assignment. In summary, the work in this dissertation expanded the scope of SPROX and evaluated the use of SPROX/Hybrid protocols for characterizing protein-ligand interactions in complex biological mixtures.
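As a simplified illustration of the midpoint-extraction step, the sketch below fits a two-state sigmoid to synthetic fraction-modified data and converts the midpoint to an unfolding free energy via the linear-extrapolation relation dG ~ m * C1/2; the m-value is assumed, and the dissertation's Bayesian midpoint assignment is more sophisticated than this.

```python
# Minimal sketch of extracting a denaturation transition midpoint (C1/2) by
# fitting a sigmoid to fraction-modified data, then estimating the unfolding
# free energy as dG ~ m * C1/2 (two-state, linear-extrapolation assumption).
import numpy as np
from scipy.optimize import curve_fit

def transition(denaturant, midpoint, slope):
    """Two-state sigmoidal transition in fraction modified vs. [denaturant]."""
    return 1.0 / (1.0 + np.exp(-slope * (denaturant - midpoint)))

# Synthetic data: fraction of a probe peptide modified at each [GdmCl] (M).
gdmcl = np.linspace(0.0, 3.0, 13)
frac_modified = transition(gdmcl, midpoint=1.4, slope=4.0)
frac_modified += np.random.default_rng(0).normal(scale=0.03, size=gdmcl.size)

(c_half, slope), _ = curve_fit(transition, gdmcl, frac_modified, p0=[1.0, 3.0])
m_value = 2.0          # kcal/mol/M, assumed here rather than fitted
print(f"C1/2 ~ {c_half:.2f} M, dG_unfolding ~ {m_value * c_half:.1f} kcal/mol (illustrative)")
```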
Abstract:
The main goal of this work is to determine the true cost incurred by the Republic of Ireland and Northern Ireland in order to meet their EU renewable electricity targets. The primary all-island of Ireland policy goal is that 40% of electricity will come from renewable sources in 2020. From this it is expected that wind generation on the Irish electricity system will be in the region of 32-37% of total generation. This leads to issues resulting from wind energy being a non-synchronous, unpredictable and variable source of energy used on a scale never seen before on a single synchronous system. If changes are not made to traditional operational practices, the efficient running of the electricity system will be directly affected by these issues in the coming years. Using models of the electricity system for the all-island grid of Ireland, the effects of the high wind energy penetration expected in 2020 are examined. These models were developed using a unit commitment, economic dispatch tool called PLEXOS, which allows a detailed representation of the electricity system down to the individual generator level. The models replicate the true running of the electricity system through day-ahead scheduling and semi-relaxed use of these schedules, reflecting the Transmission System Operator's real-time decision making on dispatch. In addition, they carefully consider other non-wind priority dispatch generation technologies that have an effect on the overall system. In the models developed, three main issues associated with wind energy integration were selected for detailed examination to determine the sensitivity of assumptions presented in other studies: wind energy's non-synchronous nature, its variability and spatial correlation, and its unpredictability. This leads to an examination of the effects in three areas: the system operation constraints required for system security; different onshore-to-offshore ratios of installed wind energy; and the degree of accuracy in wind energy forecasting. Each of these areas directly impacts the way the electricity system is run, as each addresses one of the three issues associated with wind energy stated above. It is shown that assumptions in these three areas have a large effect on the results in terms of total generation costs, wind curtailment and dispatch by generator technology type. In particular, accounting for these issues results in wind curtailment being predicted in much larger quantities than previously reported. This would have a large effect on wind energy companies, because it is already a very low profit margin industry. Results from this work show that the relaxation of system operation constraints is crucial to the economic running of the electricity system, with large reductions shown in wind curtailment and system generation costs. There are clear benefits to having a proportion of the wind installed offshore in Ireland, which would help to reduce the variability of wind energy generation on the system and therefore reduce wind curtailment. With envisaged future improvements in day-ahead wind forecasting from 8% to 4% mean absolute error, there are potential reductions in wind curtailment, system costs and open cycle gas turbine usage.
This work illustrates the consequences of assumptions in the areas of system operation constraints, onshore/offshore installed wind capacities and accuracy in wind forecasting, to better inform the true costs associated with running Ireland's changing electricity system as it continues to decarbonise into the near future. It also illustrates, using Ireland as a case study, effects that will become ever more prevalent in other synchronous systems as they pursue a path of increasing renewable energy generation.
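To make the dispatch-and-curtailment mechanics concrete, here is a toy merit-order dispatch for a single hour, with a must-run level standing in for system operation constraints and excess wind being curtailed; the unit names, capacities and costs are invented, and a PLEXOS unit-commitment model of the all-island system is far richer than this.

```python
# Minimal merit-order dispatch sketch for a single hour: wind is used first,
# subject to a must-run level of conventional plant (a stand-in for system
# operation constraints), and any excess wind is curtailed. All unit data
# are invented for illustration.
def dispatch_hour(demand_mw, wind_mw, must_run_mw, units):
    """units: list of (name, capacity_mw, cost_eur_per_mwh); dispatched cheapest first."""
    wind_used = min(wind_mw, max(demand_mw - must_run_mw, 0.0))
    curtailed = wind_mw - wind_used
    residual = demand_mw - wind_used
    schedule, cost = [], 0.0
    for name, cap, price in sorted(units, key=lambda u: u[2]):
        gen = min(cap, residual)
        if gen > 0.0:
            schedule.append((name, gen))
            cost += gen * price
            residual -= gen
    return {"wind_used": wind_used, "curtailed": curtailed,
            "schedule": schedule, "cost_eur": cost}

units = [("baseload", 2000.0, 45.0), ("ccgt", 3000.0, 70.0), ("ocgt", 1000.0, 150.0)]
print(dispatch_hour(demand_mw=4500.0, wind_mw=3500.0, must_run_mw=1500.0, units=units))
```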