952 results for Dynamic data set visualization


Relevance: 100.00%

Abstract:

Since the 1980s, several authors have presented correlations between static load tests and dynamic load tests on piles. A good correlation requires that the tests be well executed and reach failure according to some criterion, such as Davisson's, and that the time interval between the static load test and the dynamic test be taken into account, given the "set-up" effect. After the dynamic test, a CAPWAP analysis is performed, which allows the distribution of shaft friction with depth, the toe resistance, and other soil parameters such as quakes and damping to be determined. The CAPWAP analysis is carried out by trial and error through a "signal matching" procedure, that is, the best fit between the force signal measured by the sensors and the computed one. It is relatively easy to show that the same solution can be obtained from different input data. This means that, although the mobilized loads are similar, the shape of the simulated static load-test curve obtained by CAPWAP, as well as the shaft friction distribution, can differ even when the analyses show satisfactory "match quality" (MQWU). One way to correct the shape of the CAPWAP simulated curve, as well as the shaft friction distribution, is by comparison with static load tests (PCE). Superimposing the two curves, the simulated and the "real" one, allows the shaft quake to be determined from the initial stretch of the load-settlement curve of the static load test, which in turn allows a better definition of the shaft friction distribution and the toe reaction. In this context the concept of "settlement match quality" (MQR) arises. When a PCE is not available, it is proposed to apply a static load using the self-weight of the pile-driving hammer (CEPM).
Through two case histories in which both dynamic load tests and PCEs were available, it is shown that this procedure yields a better solution from a physical point of view, that is, one consistent with the subsoil characteristics and with the load-settlement curve of the PCE, and not merely a mathematical one based on the evaluation of the "match quality" (MQWU).
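The non-uniqueness of signal matching described above can be illustrated with a schematic match-quality metric. This is a toy sketch, not the proprietary CAPWAP procedure: the synthetic force signals, the two "soil models", and the peak-force normalisation are assumptions for illustration only.

```python
import numpy as np

def match_quality(measured, computed):
    """Schematic match-quality metric: mean absolute mismatch between the
    measured and computed force signals, normalised by the peak measured
    force. Lower values indicate a better signal match."""
    measured = np.asarray(measured, dtype=float)
    computed = np.asarray(computed, dtype=float)
    return np.abs(measured - computed).mean() / np.abs(measured).max()

# Two different hypothetical "soil models" matched to the same measured trace:
t = np.linspace(0.0, 1.0, 200)
measured = np.exp(-5 * t) * np.sin(10 * t)
model_a = measured + 0.01 * np.sin(50 * t)   # small high-frequency mismatch
model_b = measured + 0.01 * np.cos(50 * t)   # a different mismatch of similar size

mq_a = match_quality(measured, model_a)
mq_b = match_quality(measured, model_b)
# Both models yield near-identical match quality despite different residuals,
# echoing the point that a good signal match alone does not pin down the solution.
```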

Relevance: 100.00%

Abstract:

It is well known that there is an intrinsic link between the financial and energy sectors, which can be analyzed through their spillover effects, that is, measures of how shocks to returns in different assets affect each other's subsequent volatility in both spot and futures markets. Financial derivatives that are not only highly representative of the underlying indices but can also be traded on both the spot and futures markets include Exchange Traded Funds (ETFs), tradable spot instruments whose aim is to replicate the return of an underlying benchmark index. When ETF futures are not available for examining spillover effects, "generated regressors" may be used to construct both Financial ETF futures and Energy ETF futures. The purpose of the paper is to investigate the covolatility spillovers within and across the US energy and financial sectors in both spot and futures markets, using "generated regressors" and a multivariate conditional volatility model, namely Diagonal BEKK. The daily data used run from 1998/12/23 to 2016/4/22. The data set is analyzed in its entirety, and also subdivided into three subset time periods. The empirical results show a significant relationship between the Financial ETF and Energy ETF in the spot and futures markets. Therefore, financial and energy ETFs are suitable for constructing a financial portfolio from an optimal risk management perspective, and also for dynamic hedging purposes.
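The Diagonal BEKK(1,1) covariance recursion that underlies the model named above can be sketched as a one-pass filter. The parameter values and simulated shocks below are illustrative assumptions, not estimates from the paper's ETF data.

```python
import numpy as np

def diagonal_bekk_covariances(eps, C, a, b):
    """Filter conditional covariance matrices H_t under a Diagonal BEKK(1,1):
        H_t = C C' + A' e_{t-1} e_{t-1}' A + B' H_{t-1} B,
    with A = diag(a), B = diag(b). `eps` is a (T, n) array of return shocks.
    With diagonal A and B, the update reduces to elementwise outer products."""
    T, n = eps.shape
    CC = C @ C.T
    H = np.empty((T, n, n))
    H[0] = np.cov(eps.T)                           # initialise at the sample covariance
    for t in range(1, T):
        e = eps[t - 1]
        H[t] = CC + np.outer(a * e, a * e) + np.outer(b, b) * H[t - 1]
    return H

# Illustrative two-series example (stand-ins for financial and energy ETF shocks):
rng = np.random.default_rng(0)
eps = rng.standard_normal((500, 2)) * 0.01
C = np.array([[0.004, 0.0], [0.001, 0.003]])       # lower-triangular intercept
H = diagonal_bekk_covariances(eps, C, a=np.array([0.3, 0.25]), b=np.array([0.9, 0.92]))
# H[t, 0, 1] tracks the conditional covariance (covolatility) between the two series.
```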

Relevance: 100.00%

Abstract:

Thermodynamics Conference 2013 (Statistical Mechanics and Thermodynamics Group of the Royal Society of Chemistry), The University of Manchester, 3-6 September 2013.

Relevance: 100.00%

Abstract:

LIDAR (LIght Detection And Ranging) first return elevation data of the Boston, Massachusetts region from MassGIS at 1-meter resolution. This LIDAR data was captured in Spring 2002. LIDAR first return data (which shows the highest ground features, e.g. tree canopy, buildings etc.) can be used to produce a digital terrain model of the Earth's surface. This dataset consists of 74 First Return DEM tiles. The tiles are 4km by 4km areas corresponding with the MassGIS orthoimage index. This data set was collected using 3Di's Digital Airborne Topographic Imaging System II (DATIS II). The area of coverage corresponds to the following MassGIS orthophoto quads covering the Boston region (MassGIS orthophoto quad ID: 229890, 229894, 229898, 229902, 233886, 233890, 233894, 233898, 233902, 233906, 233910, 237890, 237894, 237898, 237902, 237906, 237910, 241890, 241894, 241898, 241902, 245898, 245902). The geographic extent of this dataset is the same as that of the MassGIS dataset: Boston, Massachusetts Region 1:5,000 Color Ortho Imagery (1/2-meter Resolution), 2001 and was used to produce the MassGIS dataset: Boston, Massachusetts, 2-Dimensional Building Footprints with Roof Height Data (from LIDAR data), 2002 [see cross references].
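The point that first-return data captures the highest ground features (canopy, buildings) can be sketched by differencing a first-return surface grid against a bare-earth terrain grid. The arrays below are toy data on a notional 1 m grid, not the MassGIS tiles.

```python
import numpy as np

def feature_height(first_return_dem, bare_earth_dem):
    """Height of above-ground features (tree canopy, buildings) as the
    difference between a first-return surface model and a bare-earth terrain
    model on the same grid; small negative differences (noise) are clipped."""
    diff = np.asarray(first_return_dem, float) - np.asarray(bare_earth_dem, float)
    return np.clip(diff, 0.0, None)

# Toy 3x3 tile: flat 10 m terrain with a 6 m building in the centre cell.
terrain = np.full((3, 3), 10.0)
surface = terrain.copy()
surface[1, 1] = 16.0
heights = feature_height(surface, terrain)   # heights[1, 1] == 6.0
```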

Relevance: 100.00%

Abstract:

This dataset consists of 2D footprints of the buildings in the metropolitan Boston area, based on tiles in the orthoimage index (orthophoto quad ID: 229890, 229894, 229898, 229902, 233886, 233890, 233894, 233898, 233902, 237890, 237894, 237898, 237902, 241890, 241894, 241898, 241902, 245898, 245902). This data set was collected using 3Di's Digital Airborne Topographic Imaging System II (DATIS II). Roof height and footprint elevation attributes (derived from 1-meter resolution LIDAR (LIght Detection And Ranging) data) are included as part of each building feature. This data can be combined with other datasets to create 3D representations of buildings and the surrounding environment.

Relevance: 100.00%

Abstract:

Permeability of the ocean crust is one of the most crucial parameters for constraining submarine fluid flow systems. Active hydrothermal fields are dynamic areas where fluid flow strongly affects the geochemistry and biology of the surrounding environment. There have been few permeability measurements in these regions, especially in felsic-hosted hydrothermal systems. We present a data set of 38 permeability and porosity measurements from the PACMANUS hydrothermal field, an actively venting, felsic hydrothermal field in the eastern Manus Basin. Permeability was measured using a complex transient method on 2.54-cm minicores. Permeability varies greatly between the samples, spanning over five orders of magnitude. Permeability decreases with both depth and decreasing porosity. When the alteration intensity of individual samples is considered, relationships between depth and porosity and permeability become more clearly defined. For incompletely altered samples (defined as >5% fresh rock), permeability and porosity are constant with depth. For completely altered samples (defined as <5% fresh rock), permeability and porosity decrease with depth. On average, the permeability values from the PACMANUS hydrothermal field are greater than those in other submarine environments using similar core-scale laboratory measurements; the average permeability, 4.5 × 10^-16 m^2, is two to four orders of magnitude greater than in other areas. Although the core-scale permeability is higher than in other seafloor environments, it is still too low to obtain the fluid velocities observed in the PACMANUS hydrothermal field based on simplified analytical calculations. It is likely that core-scale permeability measurements are not representative of bulk rock permeability of the hydrothermal system overall, and that the latter is predominantly fracture controlled.
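Because the measured permeabilities span about five orders of magnitude, the conventional summary is a log-space (geometric) mean rather than an arithmetic one. A minimal sketch with hypothetical minicore values (not the PACMANUS measurements):

```python
import numpy as np

def geometric_mean_permeability(k):
    """Geometric mean of permeability values (m^2): the mean is taken in log
    space, so no single high-permeability sample dominates the average when
    the data span several orders of magnitude."""
    k = np.asarray(k, dtype=float)
    return float(np.exp(np.log(k).mean()))

# Hypothetical minicore measurements spanning several orders of magnitude:
k_samples = np.array([1e-18, 5e-17, 2e-16, 8e-16, 3e-15, 1e-14])
k_gm = geometric_mean_permeability(k_samples)
```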

Relevance: 100.00%

Abstract:

The present data set includes 268,127 vertical in situ fluorescence profiles obtained from several available online databases and from published and unpublished individual sources. Metadata for each profile are given in further detail in the file provided here. The majority of profiles come from the National Oceanographic Data Center (NODC) and from the fluorescence profiles acquired by Bio-Argo floats available on the Oceanographic Autonomous Observations (OAO) platform (63.7% and 12.5%, respectively). Different modes of acquisition were used to collect the data presented in this study: (1) CTD profiles are acquired using a fluorometer mounted on a CTD-rosette; (2) OSD (Ocean Station Data) profiles are derived from water samples and are defined as low-resolution profiles; (3) UOR (Undulating Oceanographic Recorder) profiles are acquired by an undulating vehicle equipped with a fluorometer and towed by a research vessel; (4) PA profiles are acquired by autonomous platforms (here, profiling floats or elephant seals equipped with a fluorometer). Data acquired from gliders are not included in the compilation.

Relevance: 100.00%

Abstract:

Acoustic and pelagic trawl data were collected during various pelagic surveys carried out by IFREMER in May between 2000 and 2012 (except 2001) on the eastern continental shelf of the Bay of Biscay (Pelgas series). The acoustic data were collected with a Simrad EK60 echosounder operating at 38 kHz (beam angle at -3 dB: 7°, pulse length set to 1.024 ms). The echosounder transducer was mounted on the vessel keel, 6 m below the sea surface. The sampling design consisted of parallel transects spaced 12 nm apart, oriented perpendicular to the coastline, from 20 m to about 200 m bottom depth. The nominal sailing speed was 10 knots, and 3 knots on average during fishing operations. The scrutinising (species identification) of acoustic data was done by first characterising acoustic schools by type and then linking these types with the species composition of specific trawl hauls. The data set contains nautical-area backscattering values and biomass and abundance estimates for blue whiting along one-nautical-mile-long transect lines. Further information on the survey design, scrutinising and biomass estimation can be found in Doray et al. 2012.
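Converting nautical-area backscattering (NASC) values to fish density and biomass conventionally goes through a target strength (TS) to length relation. The sketch below assumes TS = 20·log10(L) + b20 with an illustrative b20; the actual relation and parameters used in the Pelgas series may differ.

```python
import math

def fish_per_nmi2(nasc, length_cm, b20=-65.2):
    """Convert nautical-area backscattering (NASC, m^2 nmi^-2) to areal fish
    density using a TS-length relation TS = 20*log10(L) + b20 (dB re 1 m^2).
    The b20 value is an assumption for illustration, not the survey's."""
    ts = 20.0 * math.log10(length_cm) + b20           # target strength, dB
    sigma_sp = 4.0 * math.pi * 10.0 ** (ts / 10.0)    # spherical scattering cross-section, m^2
    return nasc / sigma_sp                            # fish per square nautical mile

def biomass_tonnes(nasc, length_cm, weight_g, area_nmi2, b20=-65.2):
    """Biomass over a surveyed area, assuming a single mean length and weight."""
    return fish_per_nmi2(nasc, length_cm, b20) * weight_g * area_nmi2 / 1e6

# Hypothetical values: NASC of 100 m^2/nmi^2, 25 cm fish of 100 g, 1000 nmi^2 area.
density = fish_per_nmi2(100.0, 25.0)
biomass = biomass_tonnes(100.0, 25.0, 100.0, 1000.0)
```

Density scales linearly with NASC, so doubling the backscatter doubles the estimated number of fish per unit area.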

Relevance: 100.00%

Abstract:

Microscopic traffic-simulation tools are increasingly being applied to evaluate the impacts of a wide variety of intelligent transport systems (ITS) applications and other dynamic problems that are difficult to solve using traditional analytical models. The accuracy of a traffic-simulation system depends highly on the quality of the traffic-flow model at its core, with the two main critical components being the car-following and lane-changing models. This paper presents findings from a comparative evaluation of car-following behavior in a number of traffic simulators [advanced interactive microscopic simulator for urban and non-urban networks (AIMSUN), parallel microscopic simulation (PARAMICS), and Verkehr In Städten SIMulation (VISSIM)]. The car-following algorithms used in these simulators have been developed from a variety of theoretical backgrounds and are reported to have been calibrated on a number of different data sets. Very few independent studies have attempted to evaluate the performance of the underlying algorithms based on the same data set. The results reported in this study are based on a car-following experiment that used instrumented vehicles to record the speed and relative distance between follower and leader vehicles on a one-lane road. The experiment was replicated in each tool and the simulated car-following behavior was compared to the field data using a number of error tests. The results showed lower error values for the Gipps-based models implemented in AIMSUN and similar error values for the psychophysical spacing models used in VISSIM and PARAMICS. A qualitative drift and goal-seeking behavior test, which essentially shows how the distance headway between leader and follower vehicles should oscillate around a stable distance, also confirmed the findings.
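The Gipps-based family of car-following models referred to above (the basis of AIMSUN's model) updates the follower's speed as the minimum of a free-flow acceleration branch and a safe-braking branch. A minimal sketch of one update step, with illustrative and uncalibrated parameter values:

```python
import math

def gipps_speed(v, v_lead, gap, V=30.0, a=1.7, b=-3.0, b_hat=-3.5, tau=0.7, s=6.5):
    """One update step of the Gipps (1981) car-following model.
    v, v_lead: follower/leader speeds (m/s); gap: distance headway (m);
    V: desired speed; a: max acceleration; b, b_hat: (negative) decelerations
    of follower and leader; tau: reaction time; s: effective leader size.
    Parameter values are illustrative, not calibrated. Returns the follower's
    speed tau seconds later."""
    # Free-flow (acceleration) branch:
    v_acc = v + 2.5 * a * tau * (1.0 - v / V) * math.sqrt(0.025 + v / V)
    # Safe-braking branch (guarding against a negative discriminant):
    term = b * b * tau * tau - b * (2.0 * (gap - s) - v * tau - v_lead * v_lead / b_hat)
    v_dec = b * tau + math.sqrt(max(term, 0.0))
    return max(0.0, min(v_acc, v_dec))

# With a large gap the follower accelerates; with a closed-up gap behind a
# stopped leader the safe-braking branch forces it to a halt.
v_free = gipps_speed(20.0, 20.0, 200.0)
v_stop = gipps_speed(10.0, 0.0, 7.0)
```

Iterating this step for a leader-follower pair reproduces the oscillation of distance headway around a stable value that the drift and goal-seeking test examines.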

Relevance: 100.00%

Abstract:

Normal mixture models are often used to cluster continuous data. However, conventional approaches for fitting these models will have problems in producing nonsingular estimates of the component-covariance matrices when the dimension of the observations is large relative to the number of observations. In this case, methods such as principal components analysis (PCA) and the mixture of factor analyzers model can be adopted to avoid these estimation problems. We examine these approaches applied to the Cabernet wine data set of Ashenfelter (1999), considering the clustering of both the wines and the judges, and comparing our results with another analysis. The mixture of factor analyzers model proves particularly effective in clustering the wines, accurately classifying many of the wines by location.
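The estimation problem described above, singular component-covariance estimates when the dimension is large relative to the number of observations, and the PCA remedy can be sketched directly. The data here are simulated, not the Cabernet wine set.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its leading principal components so that a normal
    mixture can be fitted with nonsingular covariance estimates when the
    number of variables exceeds the number of observations."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# 20 observations of 50 variables: the 50x50 sample covariance has rank at
# most 19, so an unrestricted normal mixture cannot be fitted directly.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 50))
rank_full = np.linalg.matrix_rank(np.cov(X.T))   # < 50: singular

Z = pca_reduce(X, 3)
cov_reduced = np.cov(Z.T)                        # 3x3 and full rank: estimable
```

The mixture of factor analyzers model used in the paper addresses the same singularity problem inside the mixture itself, by restricting each component covariance to a low-rank-plus-diagonal form rather than reducing the data first.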

Relevance: 100.00%

Abstract:

In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs; non-static output quantities, input prices and costs of adjustment; technological change; quasi-fixed inputs; and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
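The static input-oriented DEA model that these dynamic models extend can be sketched as a linear program. This is the textbook constant-returns (CCR) building block, not the paper's dynamic formulation, and the two-unit data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented, constant-returns-to-scale (CCR) DEA efficiency of unit
    j0: minimise theta subject to X @ lam <= theta * x_j0 and Y @ lam >= y_j0,
    lam >= 0. X: (m, n) inputs and Y: (s, n) outputs for n units.
    Returns theta in (0, 1]; theta = 1 means the unit is on the frontier."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # decision vars: [theta, lam]
    # Inequalities: X lam - theta x_j0 <= 0  and  -Y lam <= -y_j0
    A_ub = np.block([[-X[:, [j0]], X], [np.zeros((s, 1)), -Y]])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Hypothetical two units, one input, one output: unit 1 uses twice the input
# of unit 0 for the same output, so it is 50% efficient.
X = np.array([[2.0, 4.0]])
Y = np.array([[1.0, 1.0]])
theta0 = dea_efficiency(X, Y, 0)    # 1.0: efficient
theta1 = dea_efficiency(X, Y, 1)    # 0.5: could halve its input
```

The paper's dynamic models add adjustment-cost terms and an intertemporal objective on top of constraints of this form.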

Relevance: 100.00%

Abstract:

To account for the preponderance of zero counts and simultaneous correlation of observations, a class of zero-inflated Poisson mixed regression models is applicable for accommodating the within-cluster dependence. In this paper, a score test for zero-inflation is developed for assessing correlated count data with excess zeros. The sampling distribution and the power of the test statistic are evaluated by simulation studies. The results show that the test statistic performs satisfactorily under a wide range of conditions. The test procedure is further illustrated using a data set on recurrent urinary tract infections. Copyright (c) 2005 John Wiley & Sons, Ltd.
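For uncorrelated data, the classical score test for zero-inflation (van den Broek, 1995), which the paper's correlated-data test generalises, can be sketched directly against an intercept-only Poisson model. The simulated counts below are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def zip_score_test(y):
    """van den Broek (1995) score test for zero-inflation against an
    intercept-only Poisson model: compares the observed number of zeros with
    the number expected under the fitted Poisson. Returns (statistic, p-value);
    the statistic is asymptotically chi-squared with 1 df under no inflation."""
    y = np.asarray(y)
    n = y.size
    lam = y.mean()                     # Poisson MLE of the mean
    p0 = np.exp(-lam)                  # model probability of a zero
    n0 = np.sum(y == 0)                # observed zeros
    stat = (n0 - n * p0) ** 2 / (n * p0 * (1 - p0) - n * lam * p0 ** 2)
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(7)
y_pois = rng.poisson(2.0, 2000)                                  # no inflation
y_zip = np.where(rng.random(2000) < 0.3, 0, rng.poisson(2.0, 2000))  # 30% excess zeros
stat_pois, p_pois = zip_score_test(y_pois)
stat_zip, p_zip = zip_score_test(y_zip)   # large statistic: strong evidence of inflation
```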

Relevance: 100.00%

Abstract:

The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response and then subsequent terms are introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
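The impute-then-update Gibbs scheme described above, where missing values are drawn at each iteration given the current parameters, can be sketched in a deliberately minimal setting: a normal mean with known unit variance and a flat prior, far simpler than the paper's multinomial hierarchical model but using the same data-augmentation idea.

```python
import numpy as np

def gibbs_with_imputation(y, n_iter=2000, seed=0):
    """Toy Gibbs sampler with data augmentation for a N(mu, 1) model where
    some observations are missing (np.nan): each sweep first imputes the
    missing values from their current conditional N(mu, 1), then draws mu
    given the completed data (flat prior, so mu | y ~ N(mean(y), 1/n))."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float).copy()
    miss = np.isnan(y)
    n = y.size
    mu = y[~miss].mean()               # start from the observed-data mean
    draws = np.empty(n_iter)
    for it in range(n_iter):
        y[miss] = rng.normal(mu, 1.0, miss.sum())     # impute missing data
        mu = rng.normal(y.mean(), 1.0 / np.sqrt(n))   # draw mu | completed data
        draws[it] = mu
    return draws

# 50 observed values at 3.0 plus 10 missing entries: the chain settles near 3.
y_obs = np.r_[np.full(50, 3.0), np.full(10, np.nan)]
draws = gibbs_with_imputation(y_obs)
posterior_mean = draws[500:].mean()
```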

Relevance: 100.00%

Abstract:

Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns with transitional gradients from one vegetation community to another. Arbitrary, and often unrealistic, sharp boundaries can otherwise be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, model prediction assessment, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, independent-data-set model validation, and assessment of model predictions across scales. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r^2 = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including the provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning of poorly studied areas. (c) 2006 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

Objective: To estimate cut-off points for the diagnosis of diabetes mellitus (DM) based on individual risk factors. Methods: A subset of the 1991 Oman National Diabetes Survey is used, including all patients with a 2-h post-glucose load >= 200 mg/dl (278 subjects) and a control group of 286 subjects. All subjects previously diagnosed as diabetic and all subjects with missing data values were excluded. The data set was analyzed with the SPSS Clementine data mining system. Decision tree learners (C5 and CART) and a method for mining association rules (the GRI algorithm) are used. Fasting plasma glucose (FPG), age, sex, family history of diabetes and body mass index (BMI) are the input risk factors (independent variables), while diabetes onset (the 2-h post-glucose load >= 200 mg/dl) is the output (dependent variable). All three techniques were tested by cross-validation (89.8%). Results: Rules produced for diabetes diagnosis are: A- GRI algorithm: (1) FPG>=108.9 mg/dl, (2) FPG>=107.1 mg/dl and age>39.5 years. B- CART decision trees: FPG>=110.7 mg/dl. C- The C5 decision tree learner: (1) FPG>=95.5 and 54, (2) FPG>=106 and 25.2 kg/m2, (3) FPG>=106 and =133 mg/dl. The three techniques produced rules which cover a significant number of cases (82%), with confidence between 74 and 100%. Conclusion: Our approach supports the suggestion that the present cut-off value of fasting plasma glucose (126 mg/dl) for the diagnosis of diabetes mellitus needs revision, and that individual risk factors such as age and BMI should be considered in defining the new cut-off value.
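The rule-induction step, in which a tree learner recovers a data-driven FPG cut-off, can be sketched on simulated records. The data, the label rule, and the learned threshold below are all hypothetical; this mimics how CART/C5 derive cut-off points but does not reproduce the Oman survey analysis.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic (FPG, age, BMI) records whose labels follow a hypothetical
# FPG cut-off of 108 mg/dl; a depth-1 tree should rediscover that threshold.
rng = np.random.default_rng(42)
n = 1000
fpg = rng.normal(110, 20, n)          # fasting plasma glucose, mg/dl
age = rng.normal(45, 12, n)           # years
bmi = rng.normal(27, 4, n)            # kg/m^2
X = np.column_stack([fpg, age, bmi])
y = (fpg >= 108).astype(int)          # hypothetical "diabetes onset" rule

tree = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)
split_feature = tree.tree_.feature[0]     # 0 -> the FPG column
threshold = tree.tree_.threshold[0]       # learned cut-off, near 108 mg/dl
```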