952 results for Meshless Method, Meshfree Method, Convection-Diffusion, Convection Dominated, Numerical Analysis


Relevance:

100.00%

Publisher:

Abstract:

Objective: To construct a theoretical model describing the social support network experienced by people involved in home care. Method: A qualitative study using the Grounded Theory method. Simultaneous data collection and analysis allowed interpretation of the meaning of the phenomenon "the social support network of people involved in home care". Results: The population's passive posture in building its own well-being was highlighted, and the need for shared responsibility between the parties involved, the population and the State, was recognized. Conclusion: It is suggested that nurses be encouraged to broaden home care to meet the demands of caregivers, and that new studies be carried out with different populations to validate or complement the proposed theoretical model.


In this paper we propose a general technique to develop first- and second-order closed-form approximation formulas for short-time options with random strikes. Our method is based on Malliavin calculus techniques and allows us to obtain simple closed-form approximation formulas depending on the derivative operator. The numerical analysis shows that these formulas are extremely accurate and improve on some previous approaches for two-asset and three-asset spread options, such as Kirk's formula or the decomposition method presented in Alòs, Eydeland and Laurence (2011).
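For reference, Kirk's formula, one of the benchmarks mentioned above, can be sketched as follows. This is an illustrative implementation under standard lognormal forward dynamics; the parameter names are ours, not the paper's:

```python
import math

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, r, T):
    """Kirk's approximation for a European call on the spread F1 - F2 with
    strike K, under lognormal forward dynamics (illustrative sketch)."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    w = F2 / (F2 + K)                       # weight of the short leg
    sigma = math.sqrt(sigma1**2 - 2.0*rho*sigma1*sigma2*w + (sigma2*w)**2)
    F = F1 / (F2 + K)                       # effective forward ratio
    d1 = (math.log(F) + 0.5*sigma**2*T) / (sigma*math.sqrt(T))
    d2 = d1 - sigma*math.sqrt(T)
    return math.exp(-r*T) * (F2 + K) * (F*N(d1) - N(d2))
```

With K = 0 the formula reduces to Margrabe's exchange-option price, which is a quick sanity check.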

Power transformations of positive data tables, prior to applying the correspondence analysis algorithm, are shown to open up a family of methods with direct connections to the analysis of log-ratios. Two variations of this idea are illustrated. The first approach is simply to power the original data and perform a correspondence analysis; this method is shown to converge to unweighted log-ratio analysis as the power parameter tends to zero. The second approach is to apply the power transformation to the contingency ratios, that is, the values in the table relative to expected values based on the marginals; this method converges to weighted log-ratio analysis, or the spectral map. Two applications are described: first, a matrix of population genetic data which is inherently two-dimensional, and second, a larger cross-tabulation with higher dimensionality, from a linguistic analysis of several books.
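The first convergence result has a simple numerical illustration: after the double-centering used in unweighted log-ratio analysis, the powered data rescaled by 1/α approaches the log-transformed table as α tends to zero. This is a minimal sketch of the limit on a hypothetical positive table, not the full correspondence analysis algorithm:

```python
import numpy as np

def double_center(M):
    # remove row and column means, as in unweighted log-ratio analysis
    return M - M.mean(axis=1, keepdims=True) - M.mean(axis=0, keepdims=True) + M.mean()

rng = np.random.default_rng(0)
P = rng.uniform(0.5, 2.0, size=(6, 4))        # hypothetical positive data table

target = double_center(np.log(P))             # unweighted log-ratio matrix
errors = {}
for alpha in (1.0, 0.5, 0.1, 0.01):
    # constants cancel under double-centering, so P**alpha / alpha suffices
    approx = double_center(P**alpha) / alpha
    errors[alpha] = np.abs(approx - target).max()
    print(f"alpha={alpha}: max deviation {errors[alpha]:.4f}")
```

The deviation shrinks roughly linearly in α, reflecting the Box-Cox limit (x^α − 1)/α → log x.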

In the scope of the European project Hydroptimet, INTERREG IIIB-MEDOCC programme, a limited area model (LAM) intercomparison is performed for intense events that caused extensive damage to people and territory. As the comparison is limited to single case studies, the work is not meant to provide a measure of the different models' skill, but to identify the key model factors for producing a good forecast of this kind of meteorological phenomenon. This work focuses on the Spanish flash-flood event also known as the "Montserrat-2000" event. The study is performed using forecast data from seven operational LAMs, placed at the partners' disposal via the Hydroptimet ftp site, and observed data from the Catalonia rain gauge network. To improve the event analysis, satellite rainfall estimates have also been considered. For statistical evaluation of quantitative precipitation forecasts (QPFs), several non-parametric skill scores based on contingency tables have been used. Furthermore, for each model run it has been possible to identify the Catalonia regions affected by misses and false alarms using the contingency table elements. Moreover, the standard "eyeball" analysis of forecast and observed precipitation fields has been supported by a state-of-the-art diagnostic method, the contiguous rain area (CRA) analysis. This method makes it possible to quantify the spatial shift of the forecast error and to identify the error sources that affected each model's forecast. High-resolution modelling and domain size seem to play a key role in providing a skillful forecast. Further work is needed to support this statement, including verification using a wider observational data set.
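The contingency-table skill scores used in QPF verification are standard; a minimal sketch of a common subset (the score names follow common usage and are not taken from the paper):

```python
def skill_scores(hits, misses, false_alarms, correct_negatives):
    """Common skill scores from a 2x2 contingency table for dichotomous
    (rain / no-rain above a threshold) forecasts."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    pod = a / (a + c)                      # probability of detection
    far = b / (a + b)                      # false alarm ratio
    csi = a / (a + b + c)                  # critical success index (threat score)
    a_random = (a + b) * (a + c) / n       # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)  # equitable threat score
    return {"POD": pod, "FAR": far, "CSI": csi, "ETS": ets}

print(skill_scores(50, 10, 20, 120))
```

ETS equals 1 for a perfect forecast and drops to 0 (or below) for forecasts no better than chance.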

It is widely known that informal contacts and networks constitute a major advantage when searching for a job. Unemployed people are likely to benefit from such informal contacts, but building and sustaining a network can be particularly difficult when out of employment. Interventions that allow unemployed people to effectively strengthen their networking capability could therefore be promising. Against this background, this article offers some indications of the direction such interventions could take. First, on the basis of data collected on a sample of 4,600 newly unemployed people in the Swiss Canton of Vaud, it examines the factors that influence jobseekers' decisions to turn to informal contacts in their job search. The article shows that many unemployed people do not make use of their network because they are unaware of the importance of this method. Second, it presents an impact analysis of an innovative intervention designed to raise awareness of the importance of networks, tested in a randomized controlled trial setting.

A new drift compensation method based on Common Principal Component Analysis (CPCA) is proposed. The drift variance in the data is found as the principal components computed by CPCA. This method finds components that are common to all gases in feature space. The method is compared, on a classification task, with previously published approaches in which the drift direction is estimated through a Principal Component Analysis (PCA) of a reference gas. The proposed method, which employs no specific reference gas but uses information from all gases, shows the same performance as the traditional approach with the best-fitted reference gas. Results are shown for data collected over seven months, including three gases at different concentrations, for an array of 17 polymeric sensors.
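The baseline approach being compared against, estimating the drift direction from a reference gas by PCA and projecting it out, can be sketched as follows. This is an illustrative sketch with synthetic drift; the CPCA variant proposed in the paper is not reproduced here:

```python
import numpy as np

def remove_drift_direction(X_train, X_ref):
    """Baseline drift correction: estimate the drift direction as the first
    principal component of a reference gas's measurements over time, then
    project it out of all feature vectors."""
    Xc = X_ref - X_ref.mean(axis=0)
    # first right singular vector = first principal component direction
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]
    return X_train - np.outer(X_train @ v, v)
```

Deflating a single direction preserves the rest of the feature space, which is why a poorly chosen reference gas degrades the correction.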

The soil water available to crops is defined by specific values of water potential limits. Estimating these hydro-physical limits, identified as the permanent wilting point (PWP) and field capacity (FC), requires selecting a suitable method through a multi-criteria analysis that is not always clearly defined. In this kind of analysis, the time required for measurements must be taken into consideration as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated by using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate - PP and water activity meter - WAM) and indirect estimation methods (pedotransfer functions - PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and the quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses of the individual influencing factors, which were considered separately so that potential benefits or drawbacks not captured by the efficiency estimation are not masked.

We present a numerical method for spectroscopic ellipsometry of thick transparent films. Assuming an analytical expression for the dispersion of the refractive index (containing several unknown coefficients), the procedure fits the coefficients at a fixed thickness; the thickness is then varied within a range around its approximate value. The final result given by our method is as follows: the sample thickness is taken to be the one that gives the best fit, and the refractive index is defined by the coefficients obtained for that thickness.
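The fit-at-fixed-thickness, scan-over-thickness procedure can be illustrated on synthetic data. The interference model and the Cauchy dispersion n(λ) = A + B/λ² below are assumptions for the sake of a runnable toy, not the actual ellipsometric model:

```python
import numpy as np
from scipy.optimize import least_squares

lam = np.linspace(400.0, 800.0, 200)            # wavelengths [nm]

def model(params, d, lam):
    A, B = params
    n = A + B / lam**2                          # assumed Cauchy dispersion
    r = ((n - 1) / (n + 1))**2                  # toy amplitude factor
    return r * np.cos(4 * np.pi * n * d / lam)  # toy interference signal

# synthetic "measurement" for a film of known thickness
d_true, A_true, B_true = 300.0, 1.45, 1.0e4
y = model((A_true, B_true), d_true, lam)

best = None
for d in np.arange(250.0, 351.0, 5.0):          # thickness scan around the estimate
    res = least_squares(lambda p: model(p, d, lam) - y, x0=(1.4, 8.0e3))
    if best is None or res.cost < best[0]:
        best = (res.cost, d, res.x)

print("best thickness:", best[1])
```

The thickness minimizing the residual is taken as the sample thickness, and the fitted (A, B) at that thickness define the refractive index.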

INTRODUCTION: The role of micromovements in the mechanism of aseptic loosening is clinically difficult to evaluate. To complement the analysis of a series of total knee arthroplasties (TKA), we used a three-dimensional numerical model to study the micromovements of the tibial implant. MATERIAL AND METHODS: Fifty-one patients (with 57 cemented Porous Coated Anatomic TKAs) were reviewed (mean follow-up 4.5 years). Radiolucency at the tibial bone-cement interface was sought on the AP radiographs and divided into 7 areas. The distribution of the radiolucency was then correlated with the axis of the lower limb as measured on the orthoradiograms. The three-dimensional numerical model is based on the finite element method. It allowed measurement of the cemented tibial implant's displacements and of the micromovements generated at the bone-cement interface. A total load (2000 Newton) was first applied vertically and asymmetrically on the tibial plateau, thereby simulating an axial deviation of the lower limbs. A posterior inclination of the load vector then permitted the addition of a tangential component to the axial load; this type of loading is generated by complex biomechanical phenomena such as knee flexion. RESULTS: 81 per cent of the 57 knees had a radiolucent line of at least 1 mm at one or more of the tibial cement-epiphysis junctional areas. The distribution of these lucent lines showed that they occurred more frequently at the periphery of the implant, and most often under the unloaded margin of the tibial plateau when axial deviation of the lower limbs was present. Numerical simulations showed that asymmetrical loading of the tibial plateau induced subsidence of the loaded margin (0-100 microns) and lift-off at the opposite border (0-70 microns).
The postero-anterior tangential component induced an anterior displacement of the tibial implant (160-220 microns) and horizontal micromovements with a non-homogeneous distribution at the bone-cement interface (28-54 microns). DISCUSSION: Comparison of the clinical and numerical results showed a relation between the development of radiolucent lines and the unloading of the tibial implant's margin, demonstrating the deleterious effect of axial deviation of the lower limbs. The irregular distribution of lucent lines under the tibial plateau was similar to the distribution of micromovements at the bone-cement interface when tangential forces were present; a causal relation between the two phenomena could not, however, be established. Numerical simulation is a truly useful method of study, as it makes it possible to calculate micromovements that are relative, non-homogeneous and of very low amplitude. However, comparative clinical studies remain essential to ensure the credibility of the results.
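The subsidence/lift-off mechanism reported by the simulations can be reproduced qualitatively with a far simpler model: a rigid tray on an elastic (Winkler) foundation under an eccentric load. All parameter values below are assumed for illustration; this is not the paper's finite element model and the magnitudes are indicative only:

```python
# Rigid tray of half-width a on springs of stiffness k per unit area,
# loaded by a vertical force P at eccentricity e from the centre.
a = 0.035        # tray half-width [m] (assumed)
b = 0.05         # tray depth [m] (assumed)
k = 5.0e9        # foundation modulus [N/m^3] (assumed)
P = 2000.0       # axial load [N], as in the simulations above
e = 0.020        # load eccentricity [m] (asymmetric loading, assumed)

# Rigid-plate equilibrium gives a linear displacement field w(x) = w0 + theta*x
A = 2 * a * b                    # contact area
I = b * (2 * a)**3 / 12          # second moment of the contact area
w0 = P / (k * A)                 # uniform subsidence
theta = P * e / (k * I)          # tilt from the eccentric moment
w_loaded = w0 + theta * a
w_unloaded = w0 - theta * a      # negative value = springs in tension, i.e. lift-off
print(f"loaded edge: {w_loaded*1e6:.0f} um, unloaded edge: {w_unloaded*1e6:.0f} um")
```

A sufficiently eccentric load drives the unloaded edge displacement negative, mirroring the lift-off at the opposite border seen in the simulations.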

This study details a method to statistically determine, on a millisecond scale and for individual subjects, those brain areas whose activity differs between experimental conditions, using single-trial scalp-recorded EEG data. To do this, we non-invasively estimated local field potentials (LFPs) using the ELECTRA distributed inverse solution and applied non-parametric statistical tests at each brain voxel and for each time point. This yields a spatio-temporal activation pattern of differential brain responses. The method is illustrated here in the analysis of auditory-somatosensory (AS) multisensory interactions in four subjects. Differential multisensory responses were temporally and spatially consistent across individuals, with onset at approximately 50 ms and superposition within areas of the posterior superior temporal cortex that have traditionally been considered auditory in their function. The close agreement of these results with previous investigations of AS multisensory interactions suggests that the present approach constitutes a reliable method for studying multisensory processing with the temporal and spatial resolution required to elucidate several existing questions in this field. In particular, the present analyses permit a more direct comparison between human and animal studies of multisensory interactions and can be extended to examine correlation between electrophysiological phenomena and behavior.
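The voxel-by-voxel, timepoint-by-timepoint testing scheme can be sketched as follows. This is an illustrative sketch using a Mann-Whitney test on synthetic single-trial estimates; the ELECTRA inverse solution and any correction for multiple comparisons are not reproduced:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def differential_map(cond_a, cond_b, alpha=0.05):
    """Non-parametric test at each (voxel, timepoint) between two conditions.
    cond_a, cond_b: arrays of shape (trials, voxels, timepoints) of
    single-trial estimated LFPs."""
    n_vox, n_t = cond_a.shape[1], cond_a.shape[2]
    p = np.ones((n_vox, n_t))
    for v in range(n_vox):
        for t in range(n_t):
            p[v, t] = mannwhitneyu(cond_a[:, v, t], cond_b[:, v, t],
                                   alternative="two-sided").pvalue
    return p < alpha   # boolean spatio-temporal activation pattern
```

The returned boolean array is the spatio-temporal pattern of differential responses; its onset along the time axis gives latencies such as the ~50 ms reported above.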

Referring to systems theory, we identify a supraindividual property in interactions between therapist and couple. We use gaze directions to describe the partners' behaviors and label this property the "mutual attending frame." We propose a procedure to observe triadic interactions in a consultation setting and a method to measure mutual attending. The method is illustrated by the data analysis of two triads contrasted on measures of therapeutic alliance. We discuss the potential of this method for the description of the interactive aspects of the therapeutic alliance.

We consider the numerical treatment of the optical flow problem by evaluating the performance of the trust region method versus the line search method. To the best of our knowledge, the trust region method is studied here for the first time for variational optical flow computation. Four different optical flow models are used to test the performance of the proposed algorithm, combining linear and nonlinear data terms with quadratic and TV regularization. We show that trust region often performs better than line search, especially in the presence of non-linearity and non-convexity in the model.
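The comparison can be reproduced in miniature with SciPy's optimizers on a standard non-convex test function. The Rosenbrock function here stands in for the non-convex flow energies; this is not the variational optical flow solver itself:

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])                 # classic hard starting point

ls = minimize(rosen, x0, jac=rosen_der, method="BFGS")       # line search
tr = minimize(rosen, x0, jac=rosen_der, hess=rosen_hess,
              method="trust-exact")                          # trust region
print("line search:", ls.nfev, "f-evals ->", ls.x)
print("trust region:", tr.nfev, "f-evals ->", tr.x)
```

A trust region method restricts each step to a region where the local model is trusted, which tends to behave better than line searches when curvature information is unreliable, echoing the finding above for non-convex models.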

This work provides a general framework for the design of second-order blind estimators without adopting any approximation about the observation statistics or the a priori distribution of the parameters. The proposed solution is obtained by minimizing the estimator variance subject to some constraints on the estimator bias. The resulting optimal estimator is found to depend on the observation fourth-order moments, which can be calculated analytically from the known signal model. Unfortunately, in most cases, the performance of this estimator is severely limited by the residual bias inherent to nonlinear estimation problems. To overcome this limitation, the second-order minimum variance unbiased estimator is deduced from the general solution by assuming accurate prior information on the vector of parameters. This small-error approximation is adopted to design iterative estimators or trackers. It is shown that the associated variance constitutes the lower bound for the variance of any unbiased estimator based on the sample covariance matrix. The paper formulation is then applied to track the angle-of-arrival (AoA) of multiple digitally-modulated sources by means of a uniform linear array. The optimal second-order tracker is compared with the classical maximum likelihood (ML) blind methods, which are shown to be quadratic in the observed data as well. Simulations have confirmed that the discrete nature of the transmitted symbols can be exploited to improve considerably the discrimination of near sources in medium-to-high SNR scenarios.
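As a point of reference for estimators that are quadratic in the observed data, the classical MUSIC estimator for a uniform linear array uses only the sample covariance matrix. This is an illustrative sketch of that classical approach, not the optimal second-order tracker derived in the paper:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudo-spectrum for a uniform linear array (element spacing d
    in wavelengths) from snapshots X of shape (n_antennas, n_snapshots).
    Quadratic in the data: only the sample covariance is used."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]             # sample covariance matrix
    _, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
    En = V[:, : M - n_sources]                  # noise subspace
    theta = np.deg2rad(np.asarray(angles_deg))
    A = np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta)[None, :])
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / denom
```

Scanning the pseudo-spectrum over a grid of candidate angles and picking its peaks yields the AoA estimates; exploiting the discrete symbol alphabet, as the paper does, goes beyond what this second-order statistic alone can provide.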

In this article we propose a novel method for calculating cardiac 3-D strain. The method requires the acquisition of myocardial short-axis (SA) slices only and produces the 3-D strain tensor at every point within every pair of slices. Three-dimensional displacement is calculated from SA slices using zHARP, and is then used to calculate the local displacement gradient and thus the local strain tensor. There are three main advantages to this method. First, the 3-D strain tensor is calculated for every pixel without interpolation; this is unprecedented in cardiac MR imaging. Second, the method is fast, in part because there is no need to acquire long-axis (LA) slices. Third, the method is accurate because the 3-D displacement components are acquired simultaneously, which reduces motion artifacts without the need for registration. This article presents the theory of computing 3-D strain from two slices using zHARP, the imaging protocol, and both phantom and in-vivo validation.
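The displacement-gradient-to-strain step can be sketched for a dense 3-D displacement field. This is a generic pointwise Green-Lagrange strain computation; the zHARP acquisition and phase processing are not modelled here:

```python
import numpy as np

def strain_tensor(u, spacing=1.0):
    """Green-Lagrange strain at every voxel from a 3-D displacement field
    u of shape (3, nx, ny, nz) on a regular grid with the given spacing."""
    # displacement gradient G[i, j] = du_i / dx_j at each voxel
    G = np.stack([np.stack(np.gradient(u[i], spacing), axis=0) for i in range(3)],
                 axis=0)
    # F = I + G and E = 0.5 * (F^T F - I), evaluated pointwise
    I = np.eye(3)[:, :, None, None, None]
    F = I + G
    E = 0.5 * (np.einsum('ki...,kj...->ij...', F, F) - I)
    return E
```

For a uniform 10% stretch along x the computation returns E_xx = 0.5 * (1.1² − 1) = 0.105 at every voxel, a convenient correctness check.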