940 results for second-order analysis
Abstract:
In this paper, we address issues in the segmentation of remotely sensed LIDAR (LIght Detection And Ranging) data. The LIDAR data, which were captured by an airborne laser scanner, contain 2.5-dimensional (2.5D) terrain surface height information, e.g. houses, vegetation, flat fields, rivers, basins, etc. Our aim in this paper is to segment ground (flat field) from non-ground (houses and high vegetation) in hilly urban areas. By projecting the 2.5D data onto a surface, we obtain a texture map as a grey-level image. Based on this image, Gabor wavelet filters are applied to generate Gabor wavelet features. These features are then grouped into various windows. Among these windows, a combination of their first- and second-order statistics is used as a measure to determine the surface properties. The test results have shown that ground areas can successfully be segmented from LIDAR data. Most buildings and high vegetation can be detected. In addition, the Gabor wavelet transform can partially remove hill or slope effects in the original data by tuning the Gabor parameters.
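The pipeline described above, filtering a grey-level height image with a Gabor kernel and then taking first- and second-order statistics over local windows, can be sketched as follows. This is a minimal illustration on a synthetic height map; the filter parameters and window size are arbitrary choices, not those of the paper.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor filter: a Gaussian envelope modulated by a cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # rotated coordinate
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

rng = np.random.default_rng(0)
# synthetic "height map": smooth ground with a rough block standing in for vegetation
img = rng.normal(0.0, 0.05, (64, 64))
img[16:48, 16:48] += rng.normal(0.0, 1.0, (32, 32))

resp = convolve(img, gabor_kernel(0.25, 0.0), mode='reflect')

# first-order (mean) and second-order (variance) statistics per local window
win_mean = uniform_filter(resp, size=9)
win_var = uniform_filter(resp**2, size=9) - win_mean**2
```

The windowed variance is far larger over the rough block than over the smooth ground; thresholding such window statistics is what separates the two classes.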
Abstract:
A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion, combining local component analysis with the finite mixture model. We start with a Parzen window estimator, which has Gaussian kernels with a common covariance matrix; local component analysis is initially applied to find the covariance matrix using the expectation-maximization algorithm. Since the constraint on the mixing coefficients of a finite mixture model places them on the multinomial manifold, we then use the well-known Riemannian trust-region algorithm to find the set of sparse mixing coefficients. The first- and second-order Riemannian geometry of the multinomial manifold is utilized in the Riemannian trust-region algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy competitive with existing kernel density estimators.
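The starting point of the method, a Parzen window estimator whose Gaussian kernels share one covariance matrix, can be sketched in a few lines of NumPy. The bandwidth matrix below is an arbitrary choice on synthetic data; the sparsification and manifold optimisation steps of the paper are not shown.

```python
import numpy as np

def parzen_kde(train, query, cov):
    """Parzen window estimate: an equal-weight Gaussian mixture with one
    component per training point and a shared covariance matrix `cov`."""
    d = train.shape[1]
    inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    diff = query[:, None, :] - train[None, :, :]          # (nq, nt, d)
    maha = np.einsum('qtd,de,qte->qt', diff, inv, diff)   # squared Mahalanobis distances
    return norm * np.exp(-0.5 * maha).mean(axis=1)

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, (500, 2))        # samples from a standard 2-D Gaussian
query = np.array([[0.0, 0.0], [4.0, 4.0]])
dens = parzen_kde(train, query, 0.25 * np.eye(2))
```

As expected, the estimated density is much higher at the mode of the generating distribution than far out in its tail.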
Abstract:
This work evaluated the effect of pressure and temperature on the yield and characteristic flavour intensity of Brazilian cherry (Eugenia uniflora L.) extracts obtained by supercritical CO2, using response surface analysis, which is a simple and efficient method for first inquiries. A complete central composite 2^2 factorial experimental design was applied using temperature (ranging from 40 to 60 degrees C) and pressure (from 150 to 250 bar) as independent variables. A second-order model proved to be predictive (p <= 0.05) for the extract yield as affected by pressure and temperature, with better results being achieved at the central point (200 bar and 50 degrees C). For the flavour intensity, a first-order model proved to be predictive (p <= 0.05), showing the influence of temperature. Greater characteristic flavour intensity in extracts was obtained for relatively high temperature (> 50 degrees C). Therefore, as far as Brazilian cherry is concerned, optimum conditions for achieving higher extract yield do not necessarily coincide with those for obtaining richer flavour intensity. Industrial relevance: Supercritical fluid extraction (SFE) is an emerging clean technology through which one may obtain extracts free from organic solvents. Extract yields from natural products for applications in the food, pharmaceutical and cosmetic industries have been widely disseminated in the literature. Accordingly, two lines of research have industrial relevance, namely, (i) operational optimization studies for high SFE yields and (ii) investigation of important properties extracts are expected to present (so as to define their prospective industrial application). Specifically, this work studied the optimization of the SFE process to obtain extracts from a tropical fruit showing high intensity of its characteristic flavour, aiming at promoting its application in natural aroma enrichment of processed foods. (C) 2008 Elsevier Ltd. All rights reserved.
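Fitting a second-order (quadratic) response surface to a central composite design, as done here for the yield, amounts to ordinary least squares on linear, quadratic and interaction terms. A minimal sketch with made-up coded data: the factor levels mimic a 2^2 CCD with axial and centre points, and the coefficients are illustrative, not the paper's.

```python
import numpy as np

def fit_second_order(x1, x2, y):
    """Least-squares fit of y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# coded CCD runs: 4 factorial points, 3 centre replicates, 4 axial points
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1.41, 1.41, 0, 0], dtype=float)
x2 = np.array([-1, 1, -1, 1, 0, 0, 0, 0, 0, -1.41, 1.41], dtype=float)
# noiseless response with its maximum at the centre point (0, 0)
y = 10 - 2 * x1**2 - 3 * x2**2 + 0.5 * x1 * x2

beta = fit_second_order(x1, x2, y)
```

With no noise added, the fitted coefficients recover the generating model exactly, confirming that the design identifies all six second-order terms.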
Abstract:
This work is an assessment of the frequency of extreme values (EVs) of daily rainfall in the city of São Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and Generalized Pareto Distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. However, in this work we use time-dependent thresholds, composed of relatively large p quantiles (for example p of 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3, and GPD-4) were fitted to each one of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3, and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both annual cycle and linear trend in GPD-4. The GPD-1 with constant scale and shape parameters is the simplest model. For identification of the best model among the four models we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant compared to the null hypothesis of no trend, at about the 98% confidence level. The non-parametric Mann-Kendall test also showed the presence of a positive trend in the annual frequency of excesses over high thresholds, with the p-value being virtually zero. Therefore, there is strong evidence that high quantiles of daily rainfall in the city of São Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of daily rainfall amount has increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
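The basic POT/GPD step, selecting exceedances above a high quantile, fitting a stationary GPD by maximum likelihood, and scoring it with the bias-corrected AIC, can be sketched as follows. The rainfall data are synthetic, and this is the constant-threshold, constant-parameter analogue of GPD-1; the time-varying covariates of GPD-2 to GPD-4 are not shown.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
daily = rng.gamma(shape=0.6, scale=12.0, size=20000)   # synthetic daily rainfall (mm)

u = np.quantile(daily, 0.97)            # constant threshold: the 0.97 quantile
excess = daily[daily > u] - u           # peaks-over-threshold exceedances

# stationary GPD fit by maximum likelihood (fixed shape and scale, as in GPD-1)
shape, _, scale = genpareto.fit(excess, floc=0)
loglik = genpareto.logpdf(excess, shape, loc=0, scale=scale).sum()

# AIC with second-order (small-sample) bias correction, k = 2 free parameters
n_exc, k = len(excess), 2
aicc = -2 * loglik + 2 * k + 2 * k * (k + 1) / (n_exc - k - 1)
```

Competing models with covariates in the scale parameter would each get their own `aicc`, and the smallest value picks the preferred model, which is how GPD-3 is isolated in the paper.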
Abstract:
It is often necessary to run response surface designs in blocks. In this paper the analysis of data from such experiments, using polynomial regression models, is discussed. The definition and estimation of pure error in blocked designs are considered. It is recommended that pure error be estimated by assuming additive block and treatment effects, as this is more consistent with designs without blocking. The recovery of inter-block information using REML analysis is discussed, although it is shown that it has very little impact if the design is nearly orthogonally blocked. Finally, prediction from blocked designs is considered and it is shown that prediction of many quantities of interest is much simpler than prediction of the response itself.
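The recommended pure-error estimate, under additive block and treatment effects, pools replicate deviations after removing each block's mean. A small numerical sketch with hypothetical centre-point replicates in two blocks:

```python
import numpy as np

# hypothetical centre-point replicates from a blocked response surface design
blocks = np.array([0, 0, 0, 1, 1, 1])
y = np.array([10.2, 10.5, 10.1, 11.9, 12.3, 12.1])   # block 1 sits higher

# assuming additive block effects, pure error is measured within blocks:
# subtract each block's mean before pooling the replicate deviations
resid = y - np.array([y[blocks == b].mean() for b in blocks])
ss_pure = (resid**2).sum()
df_pure = len(y) - len(np.unique(blocks))   # n minus one fitted mean per block
mse_pure = ss_pure / df_pure
```

Treating all six runs as replicates of one mean would fold the block shift into "pure error" and inflate it badly, which is why the within-block definition is preferred.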
Abstract:
College student burnout has been assessed mainly with the Maslach Burnout Inventory (MBI). However, the construct's definition and measurement with the MBI have drawn several criticisms, and new inventories have been suggested for the evaluation of the syndrome. A redefinition of the construct of student burnout is proposed by means of a structural equation model, reflecting burnout as a second-order factor defined by factors from the MBI-Student Survey (MBI-SS), the Copenhagen Burnout Inventory-Student Survey (CBI-SS) and the Oldenburg Burnout Inventory-Student Survey (OLBI-SS). Standardized regression weights from Burnout to Exhaustion and Cynicism from the MBI-SS scale, Personal Burnout and Studies Related Burnout from the CBI, and Exhaustion and Disengagement from the OLBI show that these factors are strong manifestations of students' burnout. For college students, the burnout construct is best defined by two dimensions described as "physical and psychological exhaustion" and "cynicism and disengagement."
Abstract:
A fourth-order numerical method for solving the Navier-Stokes equations in streamfunction/vorticity formulation on a two-dimensional non-uniform orthogonal grid has been tested on the fluid flow in a constricted symmetric channel. The family of grids is generated algebraically using a conformal transformation followed by a non-uniform stretching of the mesh cells, in which the shape of the channel boundary can vary from a smooth constriction to one which possesses a very sharp but smooth corner. The generality of the grids allows the use of long channels upstream and downstream as well as having a refined grid near the sharp corner. Derivatives in the governing equations are replaced by fourth-order central differences, and the vorticity is eliminated, either before or after the discretization, to form a wide difference molecule for the streamfunction. Extra boundary conditions, necessary for wide-molecule methods, are supplied by a procedure proposed by Henshaw et al. The ensuing set of non-linear equations is solved using Newton iteration. Results have been obtained for Reynolds numbers up to 250 for three constrictions, the first being smooth, the second having a moderately sharp corner and the third with a very sharp corner. Estimates of the error incurred show that the results are very accurate and substantially better than those of the corresponding second-order method. The observed order of the method has been shown to be close to four, demonstrating that the method is genuinely fourth-order. © 1977 John Wiley & Sons, Ltd.
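The notion of "observed order" used above can be illustrated with the fourth-order central difference for a second derivative on a uniform grid: halving the spacing should cut the error by roughly 2^4 = 16. This one-dimensional sketch is unrelated to the paper's curvilinear grids; it only demonstrates the order-measurement idea.

```python
import numpy as np

def d2_fourth(f, h):
    """Fourth-order central difference for f'':
    (-f[i-2] + 16 f[i-1] - 30 f[i] + 16 f[i+1] - f[i+2]) / (12 h^2)."""
    return (-f[:-4] + 16 * f[1:-3] - 30 * f[2:-2] + 16 * f[3:-1] - f[4:]) / (12 * h**2)

def max_error(n):
    """Max error of the scheme for f = sin on [0, 1] with n grid points."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    err = d2_fourth(np.sin(x), h) + np.sin(x[2:-2])   # exact f'' = -sin(x)
    return np.abs(err).max()

e1, e2 = max_error(11), max_error(21)   # h = 0.1 and h = 0.05
order = np.log2(e1 / e2)                # observed order of accuracy
```

The measured `order` comes out very close to 4, the same kind of check the paper uses to demonstrate that its method is genuinely fourth-order.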
Abstract:
This Doctoral Thesis focuses on the study of individual behaviours as a result of organizational affiliation. The objective is to assess the Entrepreneurial Orientation of individuals, proving the existence of a set of antecedents to that measure and returning a structural model of its micro-foundation. Relying on the developed measurement model, I address the issue of whether some Entrepreneurs exhibit different behaviours as a result of their academic affiliation, comparing a sample of 'Academic Entrepreneurs' to a control sample of 'Private Entrepreneurs' affiliated with a matched sample of Academic Spin-offs and Private Start-ups. Building on the Theory of Planned Behaviour proposed by Ajzen (1991), I present a model of causal antecedents of Entrepreneurial Orientation based on constructs extensively used and validated, both from a theoretical and an empirical perspective, in sociological and psychological studies. I focus my investigation on five major domains: (a) Situationally Specific Motivation, (b) Personal Traits and Characteristics, (c) Individual Skills, (d) Perception of the Business Environment and (e) Entrepreneurial Orientation Related Dimensions. I rely on a sample of 200 Entrepreneurs, affiliated with a matched sample of 72 Academic Spin-offs and Private Start-ups. Firms are matched by Industry, Year of Establishment and Localization, and they are all located in the Emilia Romagna region, in northern Italy. I gathered data through face-to-face interviews and used a Structural Equation Modeling technique (Lisrel 8.80; Joreskog, K., & Sorbom, D. 2006) to perform the empirical analysis. The results show that Entrepreneurial Orientation is a multi-dimensional micro-founded construct which can be better represented by a Second-Order Model. The t-tests on the latent means reveal that the Academic Entrepreneurs differ in terms of: Risk Taking, Passion, Procedural and Organizational Skills, and Perception of the Government, Context and University Supports.
The structural models also reveal that the main differences between the two groups lie in the predicting power of Technical Skills, Perceived Context Support and Perceived University Support in explaining the Entrepreneurial Orientation Related Dimensions.
Abstract:
We report a detailed physical analysis on a family of isolated, antiferromagnetically (AF) coupled, chromium(III) finite chains, of general formula (Cr(RCO2)2F)n, where the chain length n = 6 or 7. Additionally, the chains are capped with a selection of possible terminating ligands, including hfac (= 1,1,1,5,5,5-hexafluoropentane-2,4-dionate(1-)), acac (= pentane-2,4-dionate(1-)) or F3. Measurements by inelastic neutron scattering (INS), magnetometry and electron paramagnetic resonance (EPR) spectroscopy have been used to study how the electronic properties are affected by n and capping ligand type. These comparisons allowed the subtle electronic effects that the choice of capping ligand makes for odd-membered spin-3/2 ground state and even-membered spin-0 ground state chains to be investigated. For this investigation, full characterisation of physical properties has been performed with spin Hamiltonian parameterisation, including the determination of Heisenberg exchange coupling constants and single-ion axial and rhombic anisotropy. We reveal how the quantum spin energy levels of odd- or even-membered chains can be modified by the type of capping ligand terminating the chain. The choice of capping ligands enables the Cr-Cr exchange coupling to be adjusted by 0, 4 or 24%, relative to the Cr-Cr exchange coupling within the body of the chain, by the substitution of hfac, acac or F3 capping ligands at the ends of the chain, respectively. The manipulation of quantum spin levels via ligands which play no role in super-exchange is of general interest to the practice of spin Hamiltonian modelling, where such second-order effects are generally not considered of relevance to magnetic properties.
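The odd/even ground-state behaviour quoted above follows from an isotropic Heisenberg spin Hamiltonian, H = sum_i J_i S_i . S_{i+1}. A minimal exact-diagonalisation sketch for short spin-3/2 chains (toy chain lengths and couplings; the paper's fitted parameters and anisotropy terms are omitted, and the adjustable end bond merely mimics the capping-ligand effect):

```python
import numpy as np

def spin_ops(s):
    """Spin-s operators (Sx, Sy, Sz) in the |s, m> basis, m descending."""
    m = np.arange(s, -s - 1, -1)
    sz = np.diag(m)
    # <m+1|S+|m> = sqrt(s(s+1) - m(m+1))
    sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), k=1)
    return (sp + sp.T) / 2, (sp - sp.T) / 2j, sz

def chain_hamiltonian(n, s, j_bulk, j_end):
    """Open AF Heisenberg chain; the two end bonds carry j_end instead of
    j_bulk, a crude stand-in for the capping-ligand modification."""
    ops = spin_ops(s)
    dim = int(2 * s + 1)
    def embed(op, site):
        out = np.eye(1)
        for k in range(n):
            out = np.kron(out, op if k == site else np.eye(dim))
        return out
    H = np.zeros((dim**n, dim**n), dtype=complex)
    for i in range(n - 1):
        J = j_end if i in (0, n - 2) else j_bulk
        for op in ops:
            H += J * embed(op, i) @ embed(op, i + 1)
    return H

# AF coupling: an even (2-membered) spin-3/2 chain has a spin-0 singlet ground
# state; an odd (3-membered) chain has a 4-fold degenerate spin-3/2 ground state
evals_even = np.linalg.eigvalsh(chain_hamiltonian(2, 1.5, 1.0, 1.0))
evals_odd = np.linalg.eigvalsh(chain_hamiltonian(3, 1.5, 1.0, 1.0))
deg_even = int(np.sum(np.isclose(evals_even, evals_even[0], atol=1e-8)))
deg_odd = int(np.sum(np.isclose(evals_odd, evals_odd[0], atol=1e-8)))
```

Rescaling `j_end` (for example to 0.76 of `j_bulk`, the 24% adjustment reported for F3 capping) shifts the excited-level spacings while leaving the ground-state spin intact.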
Abstract:
The purpose of this research project is to study an innovative method for the stability assessment of structural steel systems, namely the Modified Direct Analysis Method (MDM). This method is intended to simplify an existing design method, the Direct Analysis Method (DM), by assuming a sophisticated second-order elastic structural analysis will be employed that can account for member and system instability, and thereby allow the design process to be reduced to confirming the capacity of member cross-sections. This last check can be easily completed by substituting an effective length of KL = 0 into existing member design equations. This simplification will be particularly useful for structural systems in which it is not clear how to define the member slenderness L/r when the laterally unbraced length L is not apparent, such as arches and the compression chord of an unbraced truss. To study the feasibility and accuracy of this new method, a set of 12 benchmark steel structural systems previously designed and analyzed by former Bucknell graduate student Jose Martinez-Garcia and a single column were modeled and analyzed using the nonlinear structural analysis software MASTAN2. A series of Matlab-based programs were prepared by the author to provide the code checking requirements for investigating the MDM. By comparing MDM and DM results against the more advanced distributed plasticity analysis results, it is concluded that the stability of structural systems can be adequately assessed in most cases using MDM, and that MDM often appears to be a more accurate but less conservative method in assessing stability.
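The logic of MDM, letting a rigorous second-order analysis capture stability effects so that the member check collapses to a cross-section check with KL = 0, can be illustrated with the simplest second-order effect, the B1 moment amplifier. The member properties below are hypothetical, and the real method relies on a full second-order analysis rather than this closed-form factor.

```python
import numpy as np

def euler_load(E, I, K, L):
    """Elastic buckling load pi^2 EI / (KL)^2 with effective length factor K."""
    if K * L == 0:
        return np.inf          # KL = 0: no member slenderness, the section governs
    return np.pi**2 * E * I / (K * L)**2

def amplified_moment(M1, P, Pe):
    """Simplest second-order effect: first-order moment M1 amplified by
    B1 = 1 / (1 - P/Pe), valid for axial load P below the buckling load Pe."""
    return M1 / (1.0 - P / Pe)

E, I, L = 200e9, 8.0e-6, 4.0            # Pa, m^4, m (illustrative column)
Pe = euler_load(E, I, 1.0, L)            # pinned-pinned Euler load
M2 = amplified_moment(10e3, 0.5 * Pe, Pe)  # axial load at half the buckling load
```

At P = Pe/2 the second-order moment is twice the first-order one, and with KL = 0 the buckling load is unbounded, so capacity is set purely by the cross-section, which is exactly the simplification MDM exploits.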
Abstract:
In numerous intervention studies and education field trials, random assignment to treatment occurs in clusters rather than at the level of observation. This departure from random assignment of individual units may be due to logistics, political feasibility, or ecological validity. Data within the same cluster or grouping are often correlated. Application of traditional regression techniques, which assume independence between observations, to clustered data produces consistent parameter estimates. However, such estimators are often inefficient compared to methods which incorporate the clustered nature of the data into the estimation procedure (Neuhaus 1993).¹ Multilevel models, also known as random effects or random components models, can be used to account for the clustering of data by estimating higher level, or group, as well as lower level, or individual, variation. Designing a study in which the unit of observation is nested within higher level groupings requires the determination of sample sizes at each level. This study investigates the design and analysis of various sampling strategies for a 3-level repeated measures design on the parameter estimates when the outcome variable of interest follows a Poisson distribution.
Results of this study suggest that second-order PQL estimation produces the least biased estimates in the 3-level multilevel Poisson model, followed by first-order PQL and then second- and first-order MQL. The MQL estimates of both fixed and random parameters are generally satisfactory when the level-2 and level-3 variation is less than 0.10. However, as the higher level error variance increases, the MQL estimates become increasingly biased. If convergence of the estimation algorithm is not obtained by the PQL procedure and the higher level error variance is large, the estimates may be significantly biased. In this case bias correction techniques such as bootstrapping should be considered as an alternative procedure.
For larger sample sizes, structures with 20 or more units sampled at levels with normally distributed random errors produced more stable estimates with less sampling variance than structures with an increased number of level-1 units. For small sample sizes, sampling fewer units at the level with Poisson variation produces less sampling variation; however, this criterion is no longer important when sample sizes are large.
¹Neuhaus J (1993). "Estimation Efficiency and Tests of Covariate Effects with Clustered Binary Data". Biometrics, 49, 989-996.
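The data-generating process under study, Poisson counts nested in subjects nested in clusters with normal random effects on the log scale, can be simulated in a few lines. The sample sizes and variance components below are arbitrary choices, and the PQL/MQL fitting itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# 3-level design: n3 clusters, n2 subjects per cluster, n1 repeated counts each
n3, n2, n1 = 30, 20, 5
sd3, sd2 = 0.3, 0.3            # random-effect SDs on the log scale (> 0.10)
b0 = 1.0                       # fixed intercept (log of the baseline rate)

u3 = rng.normal(0.0, sd3, n3)                      # level-3 (cluster) effects
u2 = rng.normal(0.0, sd2, (n3, n2))                # level-2 (subject) effects
log_mu = b0 + u3[:, None, None] + u2[:, :, None]   # shape (n3, n2, 1)
mu = np.broadcast_to(np.exp(log_mu), (n3, n2, n1))
y = rng.poisson(mu)                                # level-1 Poisson counts
```

The clustering shows up as marginal overdispersion: pooled over all observations, the variance of `y` clearly exceeds its mean, which a plain Poisson model would equate.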
Abstract:
The aim of the present work is to provide an in-depth analysis of the most representative mirroring techniques used in SPH to enforce boundary conditions (BC) along solid profiles. We specifically refer to dummy particles, ghost particles, and Takeda et al. [Prog. Theor. Phys. 92 (1994), 939] boundary integrals. The analysis has been carried out by studying the convergence of the first- and second-order differential operators as the smoothing length (that is, the characteristic length on which the SPH interpolation relies) decreases. These differential operators are of fundamental importance for the computation of the viscous drag and the viscous/diffusive terms in the momentum and energy equations. It has been proved that close to the boundaries some of the mirroring techniques lead to intrinsic inaccuracies in the convergence of the differential operators. A consistent formulation has been derived starting from the Takeda et al. boundary integrals (see the above reference). This original formulation allows implementing no-slip boundary conditions consistently in many practical applications such as viscous flows and diffusion problems.
Abstract:
The development of a global instability analysis code is presented, coupling a time-stepping approach, as applied to the solution of BiGlobal and TriGlobal instability analyses [1, 2], with finite-volume-based spatial discretization, as used in standard aerodynamics codes. The key advantage of the time-stepping method over matrix-formulation approaches is that the former provides a solution to the computer-storage issues associated with the latter methodology. To date, both approaches have been successfully used to analyze instability in complex geometries, although their relative advantages have never been quantified. The ultimate goal of the present work is to address this issue in the context of spatial discretization schemes typically used in industry. The time-stepping approach of Chiba [3] has been implemented in conjunction with two direct numerical simulation algorithms, one based on the high-order methods typically used in this context and another based on low-order methods representative of those in common use in industry. The two codes have been validated with solutions of the BiGlobal EVP, and it has been shown that small errors in the base flow do not significantly affect the results. As a result, a three-dimensional compressible unsteady second-order code for global linear stability has been successfully developed, based on finite-volume spatial discretization and a time-stepping method, with the ability to study complex geometries by means of unstructured and hybrid meshes.
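The essence of the time-stepping approach is that the stability matrix is never formed: the eigensolver only needs the action of the propagator exp(A T), which a flow solver supplies by integrating the linearised equations over a short horizon. A matrix-free sketch on a small made-up operator, where `solve_ivp` stands in for the DNS time step and the rightmost eigenvalue 0.5 is planted by construction:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse.linalg import LinearOperator, eigs

rng = np.random.default_rng(4)
n, T = 6, 1.0

# hypothetical linearised operator A with known spectrum; rightmost eigenvalue 0.5
Q = rng.normal(size=(n, n))
A = Q @ np.diag([0.5, -1.0, -1.5, -2.0, -2.5, -3.0]) @ np.linalg.inv(Q)

def propagate(q):
    """Matrix-free action of the propagator exp(A T): integrate dq/dt = A q.
    In a real code this call would be one run of the linearised flow solver."""
    sol = solve_ivp(lambda t, y: A @ y, (0.0, T), q, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

P = LinearOperator((n, n), matvec=propagate)
mu = eigs(P, k=1, which='LM', return_eigenvectors=False)   # leading Ritz value
lam = np.log(mu[0]) / T        # map back: eigenvalues of A are log(mu)/T
```

The Arnoldi iteration only ever calls `propagate`, so storage is a handful of state vectors rather than the full matrix, which is exactly the advantage over matrix-formulation approaches cited above.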