974 results for Computing cost
Abstract:
Construction is an information-intensive industry in which the accuracy and timeliness of information are paramount. It has been observed that the main communication issue in construction is providing a method to exchange data between the site operation, the site office and the head office. The information needs under consideration are time critical to assist in maintaining or improving efficiency at the jobsite, and without appropriate computing support problem solving becomes more difficult. Many researchers have focused on the use of mobile computing devices in the construction industry, and they believe that mobile computers have the potential to solve some of the construction problems that reduce overall productivity. However, to date very little investigation has been conducted into the deployment of mobile computers for construction workers on-site. Providing field workers with accurate, reliable and timely information at the location where it is needed supports effectiveness and efficiency at the job site. Bringing a new technology into the construction industry requires not only a better understanding of the application, but also proper preparation of the allocation of resources such as people and investment. With this in mind, an accurate analysis is needed to provide a clear idea of the overall costs and benefits of the new technology. A cost-benefit analysis is a method of evaluating the relative merits of a proposed investment project in order to achieve an efficient allocation of resources. It is a way of identifying, portraying and assessing the factors which need to be considered in making rational economic choices. In principle, a cost-benefit analysis is a rigorous, quantitative and data-intensive procedure, which requires identification of all potential effects, categorisation of these effects as costs and benefits, quantitative estimation of the extent of each cost and benefit associated with an action, translation of these into a common metric such as dollars, discounting of future costs and benefits into the terms of a given year, and summation of all costs and benefits to see which is greater. Even though many cost-benefit analysis methodologies are available for general assessment, there is no specific methodology that can be applied to analysing the costs and benefits of the application of mobile computing devices on the construction site. Hence, the proposed methodology in this document is predominantly adapted from Baker et al. (2000), Department of Finance (1995), and Office of Investment Management (2005). The methodology is divided into four main stages and then detailed in ten steps. The methodology is provided for the CRC CI 2002-057-C Project: Enabling Team Collaboration with Pervasive and Mobile Computing and can be seen in detail in Section 3.
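A minimal sketch of the discounting and summation steps listed above, with purely hypothetical cost and benefit streams and discount rate (illustrative only, not figures from the report):

```python
# Illustrative sketch of the discounting/summation steps of a cost-benefit
# analysis; the cash flows and the 7% discount rate are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical yearly costs/benefits of deploying mobile computing on-site ($)
costs    = [120_000, 30_000, 30_000, 30_000, 30_000]   # devices, support, training
benefits = [0, 60_000, 70_000, 80_000, 80_000]          # time savings, fewer rework errors

rate = 0.07
npv = present_value(benefits, rate) - present_value(costs, rate)
bcr = present_value(benefits, rate) / present_value(costs, rate)

print(f"NPV = {npv:,.0f}  Benefit/Cost ratio = {bcr:.2f}")
# The investment is favoured when NPV > 0 (equivalently, when BCR > 1).
```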
Abstract:
In exploration geophysics, velocity analysis and migration methods other than reverse time migration are based on ray theory or the one-way wave equation, so multiples are regarded as noise and need to be attenuated. Attenuating multiples is very important for structural imaging and amplitude-preserving migration, so how to predict and attenuate internal multiples effectively is an interesting research topic in both theory and application. There are two wave-equation-based methods for predicting internal multiples in pre-stack data: the common focus point method and the inverse scattering series method. After comparing the two, we found four problems in the common focus point method: 1. it depends on the velocity model; 2. only the internal multiples related to one layer can be predicted at a time; 3. the computing procedure is complex; 4. it is difficult to apply in complex media. To overcome these problems, we adopt the inverse scattering series method. However, the inverse scattering series method also has problems: 1. its computing cost is high; 2. it is difficult to predict internal multiples at far offsets; 3. it cannot predict internal multiples in complex media. Among these, the high computing cost is the biggest barrier in field seismic processing, so I present improved 1D and 1.5D algorithms for reducing computing time. In addition, I propose a new algorithm to solve the problem that arises in subtraction, especially for surface-related multiples. The main contributions of this research are as follows: 1. an improved inverse scattering series prediction algorithm for 1D was derived; it has very high computing efficiency, being about twelve times faster than the old algorithm in theory and about eighty times faster in practice thanks to its lower spatial complexity; 2. an improved inverse scattering series prediction algorithm for 1.5D was derived; it moves the computation from the pseudo-depth wavenumber domain to the TX domain for predicting multiples, and it shows merits such as higher computing efficiency, applicability to many kinds of acquisition geometries, lower predictive noise and independence from the wavelet; 3. a new subtraction algorithm was proposed; rather than trying to overcome nonorthogonality, it uses the distribution of nonorthogonality in the TX domain to estimate the true wavelet with a filtering method, and it performs very well in model tests. The improved 1D and 1.5D inverse scattering series algorithms can predict internal multiples, and after filtering and subtraction among seismic traces within a time window, internal multiples can be attenuated to some degree. The proposed 1D and 1.5D algorithms have been shown to be effective on numerical and field data, and the new subtraction algorithm is effective on complex theoretical models.
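For reference, a hedged sketch of the widely cited 1D normal-incidence form of the inverse scattering series internal-multiple prediction in pseudo-depth (quoted from the general literature; the improved algorithms described above target the cost of evaluating expressions of this type and are not reproduced here):

\[
b_{3}(k) \;=\; \int_{-\infty}^{\infty} \mathrm{d}z_{1}\, e^{\,ikz_{1}}\, b_{1}(z_{1})
\int_{-\infty}^{\,z_{1}-\varepsilon} \mathrm{d}z_{2}\, e^{-ikz_{2}}\, b_{1}(z_{2})
\int_{z_{2}+\varepsilon}^{\infty} \mathrm{d}z_{3}\, e^{\,ikz_{3}}\, b_{1}(z_{3}),
\]

where b_1(z) is the input data after constant-velocity migration to pseudo-depth and ε is a small positive parameter enforcing the lower-higher-lower relation among the three pseudo-depths. The nested integrals are what make direct evaluation expensive; precomputing the inner integrals as running (cumulative) sums is one standard way to cut the cost of such expressions.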
Abstract:
Oil companies and scientific groups have been focusing on 3D wave equation prestack depth migration because it can handle complex geologic structures accurately and preserve the wave information, which is favourable for lithology imaging. The symplectic method was first proposed by Feng Kang in 1984 and has become a hotspot of numerical computation research; because of its great virtues it is bound to be widely applied in many scientific fields. Against this background, this paper combines the symplectic method with 3-D wave equation prestack depth migration to develop an effective numerical method for wavefield extrapolation. Based on a thorough analysis of the computational method and of the performance of PC clusters, a seismic prestack depth migration workflow that exploits the virtues of both the migration method and the PC cluster has been formulated. The software based on this workflow, named 3D Wave Equation Prestack Depth Migration of Symplectic Method, has been registered with the National Bureau of Copyright (No. 0013767), and the Dagang and Daqing oil fields have put it into use in field data processing. In this paper, the one-way wave equation operator is decomposed into a phase-shift operator, a time-shift operator and a correction term, with a high-order symplectic method used to approximate the exponential operator. After reviewing operator anti-aliasing, the computation of the maximum migration angle and the imaging condition, we present impulse-response tests of the symplectic method. Taking the imaging results of the SEG/EAGE salt and overthrust models as examples to examine the ability of our software system to image complex geologic structures, the paper discusses the effect of the choice of imaging parameters and of the seismic wavelet on the migration result, and compares the 2-D and 3-D prestack depth migration results for the salt model. We also present the impulse-response test for the overthrust model. The imaging results for the two international models indicate that the symplectic method of 3-D prestack depth migration accommodates strong lateral velocity variation and complex geologic structure. The huge computing cost is the key obstacle preventing 3-D wave equation prestack depth migration from being adopted by the oil industry. After a thorough analysis of the prestack depth migration workflow and the characteristics of PC clusters, the paper puts forward: i) parallel algorithms in the shot and frequency domains for common-shot-gather 3-D wave equation prestack migration; ii) an optimized breakpoint setting scheme for field data processing; iii) dynamic and static load balancing among the nodes of the PC cluster in 3-D prestack depth migration. It has been shown that the computation time of 3-D prestack depth migration imaging is greatly shortened when the computing methods presented in the paper are adopted. In addition, considering the 3-D wave equation prestack depth migration workflow in complex media and examples of field data processing, the paper puts the emphasis on: i) the relevant preprocessing of the seismic data, ii) 2.5D prestack depth migration velocity analysis, and iii) 3D prestack depth migration. The results of field data processing show that the proposed workflow performs satisfactorily.
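For orientation, a hedged sketch of the one-way extrapolation that such a decomposition acts on, written in the wavenumber-frequency domain for a laterally constant reference velocity v (the specific high-order symplectic approximation of the exponential is the paper's contribution and is not reproduced here):

\[
P(k_x,k_y,z+\Delta z,\omega) \;=\; e^{\,i k_z \Delta z}\, P(k_x,k_y,z,\omega),
\qquad
k_z \;=\; \sqrt{\frac{\omega^{2}}{v^{2}} - k_x^{2} - k_y^{2}} ,
\]

In practice the exponential is split into a reference phase shift plus correction terms for lateral velocity variation, and it is in approximating this exponential operator to high order that a symplectic scheme can be employed.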
Abstract:
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient–phytoplankton–zooplankton–detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry–climate interactions.
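As a hedged illustration of the conventional statistical techniques such a skill assessment typically relies on (synthetic data and a simple area-weighted correlation/normalised-RMSE pair; not the paper's exact metrics):

```python
# Hypothetical sketch of conventional model-skill statistics (correlation and
# normalised RMSE) for one biogeochemical field compared with observations.
import numpy as np

def skill(model, obs, area_weights):
    """Area-weighted correlation and RMSE normalised by the obs. std. dev."""
    w = area_weights / area_weights.sum()
    mm, mo = np.sum(w * model), np.sum(w * obs)
    cov = np.sum(w * (model - mm) * (obs - mo))
    sm = np.sqrt(np.sum(w * (model - mm) ** 2))
    so = np.sqrt(np.sum(w * (obs - mo) ** 2))
    corr = cov / (sm * so)
    nrmse = np.sqrt(np.sum(w * (model - obs) ** 2)) / so
    return corr, nrmse

# Example with synthetic surface-nitrate fields on a list of grid cells
rng = np.random.default_rng(0)
obs = rng.uniform(0, 30, size=1000)          # "observed" nitrate (mmol m-3)
model = obs + rng.normal(0, 3, size=1000)    # "modelled" nitrate
weights = rng.uniform(0.5, 1.0, size=1000)   # grid-cell areas
print(skill(model, obs, weights))
```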
Abstract:
This paper presents the generation of optimal trajectories by genetic algorithms (GA) for a planar robotic manipulator. The implemented GA considers a multi-objective function that minimizes the end-effector positioning error together with the angular displacement of the joints, and it solves the inverse kinematics problem along the trajectory. Computer simulation results are presented to illustrate this implementation and show the efficiency of the methodology, which produces smooth trajectories at low computing cost. © 2011 Springer-Verlag Berlin Heidelberg.
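A minimal sketch of the kind of multi-objective fitness such a GA can minimize, written for a planar two-link arm; the link lengths, objective weights and joint encoding are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical fitness for a planar 2-link manipulator: end-effector position
# error plus a penalty on joint angular displacement (both to be minimised).
import numpy as np

L1, L2 = 1.0, 0.8          # assumed link lengths (m)
W_ERR, W_DISP = 1.0, 0.1   # assumed weights of the two objectives

def forward_kinematics(q):
    """End-effector (x, y) of the planar 2-link arm for joint angles q."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def fitness(q, q_prev, target):
    """Lower is better: positioning error + joint displacement from q_prev."""
    pos_err = np.linalg.norm(forward_kinematics(q) - target)
    joint_disp = np.linalg.norm(np.asarray(q) - np.asarray(q_prev))
    return W_ERR * pos_err + W_DISP * joint_disp

# Example: evaluate one candidate chromosome against one trajectory point
print(fitness(q=[0.6, 0.9], q_prev=[0.5, 0.8], target=[1.2, 1.1]))
```

A GA would evolve the joint-angle chromosomes for each trajectory point, which is what yields a solution of the inverse kinematics problem while keeping successive joint displacements small.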
Abstract:
This cross-sectional study was undertaken to evaluate the impact, in terms of HIV/STD knowledge and sexual behavior, that the City of Houston HIV/STD prevention program in HISD high schools has had on students who have participated in it, by comparing them with their peers who have not, based on self reports. The study further evaluated the program's cost-effectiveness for averting future HIV infections by computing cost-utility ratios based on reported sexual behavior. Mixed results were obtained, indicating a statistically significant difference in knowledge, with the intervention group having scored higher (p-value 0.001), but not for any of the behaviors assessed. The overall p-value for the knowledge score outcome remained statistically significant after adjusting for each stratifying variable (age, grade, gender and ethnicity). The odds of scoring 70% or higher were 1.86 times those of the comparison group counterparts for intervention participants aged 15 years or more, 2.29 times for female intervention participants, and 2.47 times for Black/African American intervention participants. The knowledge score results remained statistically significant in the logistic regression model, which controlled for age, grade level, gender and ethnicity; the odds ratio in this case was 1.74. Three scenarios based on the difference in the risk of HIV infection between the intervention and comparison groups were used for the computation of cost-utility ratios: base, worst and best-case scenarios. The best-case scenario yielded cost-effective results for male participants and cost-saving results for female participants when using ethnicity-adjusted HIV prevalence, and it remained cost-effective for female participants when using the unadjusted HIV prevalence. The challenge for the program is to devise approaches that can enhance benefits for male participants. If this is a threshold problem, implying that male participants require more intensive programs for behavioral change, then programs should first be piloted among boys before being implemented across the board. If it reflects gender differences, then we might have to go back to the drawing board and engage boys in focus group discussions that would help formulate more effective programs. The gender-blind approaches currently in vogue do not seem to be working.
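As a hedged sketch of the cost-utility computation referred to above, using the generic structure common in HIV-prevention cost-effectiveness analyses (the symbols are illustrative, not the study's parameter names):

\[
\mathrm{CUR} \;=\; \frac{C_{\text{program}} - A \cdot C_{\text{treatment}}}{A \cdot Q},
\]

where A is the number of HIV infections averted by the program, C_treatment the discounted lifetime treatment cost saved per averted infection, and Q the quality-adjusted life years saved per averted infection. A negative numerator corresponds to the "cost-saving" outcome reported above, and a sufficiently small positive ratio to the "cost-effective" outcome.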
Abstract:
Improving air quality is an eminently interdisciplinary task. The wide variety of sciences and stakeholders involved calls for simple yet fully integrated and reliable evaluation tools. Integrated assessment modeling has proved to be a suitable solution for describing air pollution systems because it considers each of the stages involved: emissions, atmospheric chemistry, dispersion, environmental impacts and abatement potentials. Several integrated assessment models are available at the European scale that cover each of these stages, with the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS) model being the most recognized and widely used within the European policy-making context. However, addressing air quality at the national/regional scale under an integrated assessment framework is desirable, and for this purpose European-scale models do not provide enough spatial resolution or detail in their ancillary data sources, mainly emission inventories and local meteorology patterns, nor in the associated results. 
The objective of this dissertation is to present the developments in the design and application of an Integrated Assessment Model especially conceived for Spain and Portugal. The Atmospheric Evaluation and Research Integrated system for Spain (AERIS) is able to quantify concentration profiles for several pollutants (NO2, SO2, PM10, PM2.5, NH3 and O3), the atmospheric deposition of sulfur and nitrogen species and their related impacts on crops, vegetation, ecosystems and health as a response to percentage changes in the emissions of relevant sectors. The current version of AERIS considers 20 emission sectors, either corresponding to individual SNAP sectors or macrosectors, whose contributions to air quality levels, deposition and impacts have been modeled through the use of source-receptor matrices (SRMs). These matrices are proportionality constants that relate emission changes with different air quality indicators and have been derived through statistical parameterizations of an air quality modeling system (AQM). For the concrete case of AERIS, its parent AQM relied on the WRF model for meteorology and on the CMAQ model for atmospheric chemical processes. The quantification of atmospheric deposition and of impacts on ecosystems, crops, vegetation and human health has been carried out following the standard methodologies established under international negotiation frameworks such as CLRTAP. The programming structure is MATLAB®-based, allowing great compatibility with typical software such as Microsoft Excel® or ArcGIS®. Regarding air quality levels, AERIS is able to provide mean annual and mean monthly concentration values, as well as the indicators established in Directive 2008/50/EC, namely the 19th highest hourly value for NO2, the 25th highest hourly value and the 4th highest daily value for SO2, the 36th highest daily value for PM10, and the 26th highest maximum 8-hour daily value, SOMO35 and AOT40 for O3. Regarding atmospheric deposition, the annual accumulated deposition per unit of area of species of oxidized and reduced nitrogen as well as sulfur can be estimated. When relating the aforementioned values with specific characteristics of the modeling domain such as land use, forest and crop covers, population counts and epidemiological studies, a wide array of impacts can be calculated. When focusing on impacts on ecosystems and soils, AERIS is able to estimate critical load exceedances and accumulated average exceedances for nitrogen and sulfur species. Damage to forests is estimated as an exceedance of established critical levels of NO2 and SO2. Additionally, AERIS is able to quantify damage caused by O3 and SO2 on grapes, maize, potato, rice, sunflower, tobacco, tomato, watermelon and wheat. Impacts on human health are modeled as a consequence of exposure to PM2.5 and O3 and quantified as losses in statistical life expectancy and premature mortality indicators. The accuracy of the IAM has been tested by statistically contrasting the obtained results with those yielded by the conventional AQM, exhibiting in most cases a good agreement level. Due to the fact that impacts cannot be directly produced by the AQM, a credibility analysis was carried out for the outputs of AERIS for a given emission scenario by comparing them through probability tests against the performance of GAINS for the same scenario. This analysis revealed a good correspondence in the mean behavior and the probabilistic distributions of the datasets. 
The verification tests that were applied to AERIS suggest that the results are consistent enough to be credited as reasonable and realistic. In conclusion, the main reason that motivated the creation of this model was to produce a reliable yet simple screening tool that would provide decision- and policy-making support for different “what-if” scenarios at a low computing cost. The interaction with politicians and other stakeholders dictated that reconciling the complexity of modeling with the conciseness of policies should be reflected by AERIS in both its conceptual and computational structures. It should be noted, however, that AERIS has been created under a policy-driven framework and by no means should be considered a substitute for ordinary AQMs.
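A hedged sketch of how source-receptor matrices of this kind are typically applied (generic notation; not necessarily AERIS's internal formulation):

\[
c_{j}(\Delta E) \;=\; c_{j}^{0} \;+\; \sum_{s=1}^{20} f_{j,s}\, \Delta E_{s},
\]

where c_j^0 is the baseline value of air-quality indicator j, ΔE_s the emission change applied to sector s, and f_{j,s} the source-receptor coefficient obtained from the statistical parameterization of the WRF/CMAQ simulations; evaluating this linear combination is what allows "what-if" scenarios to be screened at a fraction of the cost of rerunning the full AQM.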
Abstract:
Based on the possibility of real-time interaction with three-dimensional environments through an advanced interface, Virtual Reality is the main technology of this work, used in the design of virtual environments based on real hydroelectric plants. Prior to deploying a Virtual Reality system for operation, three-dimensional modeling and the setup of interactive scenes are very important steps. However, due to their magnitude and complexity, the generation of power plant virtual environments currently has a high computing cost. This work presents a methodology to optimize the production process of virtual environments associated with real hydroelectric power plants. In partnership with the electric utility CEMIG, several HPPs were used within the scope of this work, and the techniques within the methodology were applied during the modeling of each one of them. After evaluating the computational techniques presented here, it was possible to confirm a reduction in the time required to deliver each hydroelectric complex. Thus, this work presents the current scenario of the development of virtual hydroelectric power plants and discusses the proposed methodology, which seeks to optimize this process in the electricity generation sector.
Abstract:
With Tweet volumes reaching 500 million a day, sampling is inevitable for any application using Twitter data. Realizing this, data providers such as Twitter, Gnip and Boardreader license sampled data streams priced in accordance with the sample size. Big Data applications working with sampled data are interested in a sample that is large enough to be representative of the universal dataset. Previous work on the representativeness issue has focused on ensuring that the global occurrence rates of key terms can be reliably estimated from the sample. Present technology allows sample size estimation in accordance with probabilistic bounds on occurrence rates for the case of uniform random sampling. In this paper, we consider the problem of further improving sample size estimates by leveraging stratification in Twitter data. We analyze our estimates through an extensive study using simulations and real-world data, establishing the superiority of our method over uniform random sampling. Our work provides the technical know-how for data providers to expand their portfolio to include stratified sampled datasets, whereas applications benefit by being able to monitor more topics/events at the same data and computing cost.
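As a hedged illustration of the kind of probabilistic bound involved (a standard Hoeffding-style argument and Neyman allocation, not necessarily the paper's exact estimators): under uniform random sampling, estimating a term's occurrence rate to within ±ε with confidence 1−δ requires roughly

\[
n \;\ge\; \frac{\ln(2/\delta)}{2\varepsilon^{2}},
\]

whereas a stratified design can allocate samples across strata h in proportion to stratum size and variability,

\[
n_h \;=\; n \,\frac{N_h \sigma_h}{\sum_{l} N_l \sigma_l},
\]

so strata in which the occurrence rate varies little consume fewer samples for the same overall guarantee, which is where the improvement over uniform random sampling comes from.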
Abstract:
This paper uses transaction cost theory to study cloud computing adoption. A model is developed and tested with data from an Australian survey. According to the results, perceived vendor opportunism and perceived legislative uncertainty around cloud computing were significantly associated with perceived cloud computing security risk. There was also a significant negative relationship between perceived cloud computing security risk and the intention to adopt cloud services. This study also reports on adoption rates of cloud computing in terms of applications, as well as the types of services used.
Abstract:
In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions for the problems of solving systems of linear algebraic equations, matrix inversion and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power are presented, i.e., an algorithm for which the probability and systematic errors are of the same order, and this cost is compared with that of a corresponding deterministic method.
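A minimal sketch of the random-walk estimator that underlies Monte Carlo methods for a bilinear form such as h^T A^k f, using transition probabilities proportional to |a_ij| (the "almost optimal" choice); this illustrates the idea under simplifying assumptions rather than reproducing the paper's MAO algorithm or its balancing of errors:

```python
# Monte Carlo estimate of the bilinear form h^T (A^k) f via random walks whose
# transition probabilities are proportional to |a_ij| ("almost optimal" choice).
import numpy as np

def bilinear_form_mc(A, h, f, k, n_walks=20000, rng=np.random.default_rng(0)):
    n = A.shape[0]
    p0 = np.abs(h) / np.abs(h).sum()                       # initial density ~ |h_i|
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)   # transitions ~ |a_ij|
    total = 0.0
    for _ in range(n_walks):
        i = rng.choice(n, p=p0)
        w = h[i] / p0[i]                                   # importance weight
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]
            i = j
        total += w * f[i]
    return total / n_walks

# Small check against the deterministic value h^T A^3 f
A = np.array([[0.4, 0.1, 0.2], [0.1, 0.3, 0.1], [0.2, 0.1, 0.5]])
h = np.array([1.0, 2.0, -1.0]); f = np.array([0.5, -1.0, 2.0])
print(bilinear_form_mc(A, h, f, k=3), h @ np.linalg.matrix_power(A, 3) @ f)
```

Each walk contributes an unbiased sample of h^T A^k f, and a polynomial p(A) can be handled by accumulating the weighted contributions after each step of the same walk.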
Abstract:
In this project, we propose the implementation of a 3D object recognition system that will be optimized to operate under demanding time constraints. The system must be robust so that objects can be recognized properly in poor lighting conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power mobile GPU computing platform (NVIDIA Jetson TK1) so that it can be integrated in mobile robotics systems, ambient intelligence or ambient assisted living applications. The acquisition system is based on the use of color and depth (RGB-D) data streams provided by low-cost 3D sensors like Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated will be quite broad, ranging from the acquisition, outlier removal or filtering of the input data and the segmentation or characterization of regions of interest in the scene to object recognition and pose estimation themselves. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models, reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, diverse distances from the elements to the sensor and variations in the pose of the target objects. The creation of this dataset implies the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
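A minimal sketch of one preprocessing stage mentioned above, statistical outlier removal on a point cloud; the neighbourhood size and threshold are hypothetical, and this generic formulation is not claimed to be the project's implementation:

```python
# Hypothetical statistical outlier removal for an RGB-D point cloud: drop
# points whose mean distance to their k nearest neighbours deviates too far
# from the global mean of that statistic.
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=16, std_ratio=2.0):
    """points: (N, 3) array in metres. Returns the filtered (M, 3) array."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)       # mean distance to k neighbours
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

# Example with a synthetic noisy cloud
rng = np.random.default_rng(1)
cloud = rng.normal(size=(5000, 3)) * 0.05 + [0.0, 0.0, 0.8]   # object surface
noise = rng.uniform(-1, 1, size=(50, 3))                       # sparse outliers
filtered = remove_statistical_outliers(np.vstack([cloud, noise]))
print(len(filtered), "points kept")
```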