842 results for non-parametric background modeling


Relevance:

100.00%

Publisher:

Abstract:

Strategy research has been widespread for many years and, more recently, the process of strategy formation from an individual perspective has also gained attention in academia. Confirming this trend, the goal of this study is to discuss the process of strategy formation from an individual perspective based on the three dimensions of the strategic process (change, thinking and formation) proposed by De Wit and Meyer (2004). To this end, this exploratory-descriptive study used factor analysis, non-parametric correlation and linear regression to analyze data collected from decision makers at 93 retailers in the construction supplies industry in Natal and its metropolitan area. As a result, most of the formation factors of the dimensions investigated were identified, confirming the existence of paradoxes in the strategic process, and a relationship was found between logical thinking and deliberate formation on the one hand and the hierarchical level of decision makers on the other.
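
As a minimal sketch of the kind of analysis described (the study's actual variables and data are not reproduced here), the snippet below computes a non-parametric Spearman correlation between a hypothetical hierarchical-level variable and a deliberate-formation factor score, followed by a simple linear regression:

```python
# Hypothetical data standing in for the 93 decision makers; column
# meanings are assumptions, not the study's actual measurements.
import numpy as np
from scipy.stats import spearmanr, linregress

rng = np.random.default_rng(0)
hierarchy_level = rng.integers(1, 4, size=93)        # 1=operational .. 3=top management
formation_score = 0.5 * hierarchy_level + rng.normal(0, 1, 93)

rho, p_value = spearmanr(hierarchy_level, formation_score)   # non-parametric correlation
fit = linregress(hierarchy_level, formation_score)           # simple linear regression

print(f"Spearman rho={rho:.2f} (p={p_value:.3f})")
print(f"OLS slope={fit.slope:.2f}, R^2={fit.rvalue**2:.2f}")
```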

Relevance:

100.00%

Publisher:

Abstract:

Academic demands, a new social context, new routines and decreased parental control are factors that may influence the sleep pattern of first-year university students. Medical students at the Federal University of Rio Grande do Norte (UFRN) attend a full-time course with demanding subjects and, in the first semester, classes begin at 7 a.m. This group is composed of young adults who still experience the delayed sleep phase common in adolescence, suggesting that this class schedule may be inappropriate for their age. The reduction of nocturnal sleep during school days and the attempt to recover sleep on free days, known as social jet lag (JLS), suggest that first-semester students suffer from high sleep pressure, which may affect cognitive tasks and performance. Therefore, the aim of this study was to investigate the relationship between sleep pressure and the academic profile of first-semester medical students at UFRN, characterizing this population sociodemographically and investigating possible impacts on the rest-activity rhythm and academic performance. A sample of 88 healthy men and women answered the following questionnaires: Pittsburgh Sleep Quality Index (PSQI), Epworth Sleepiness Scale (ESS), Horne & Ostberg chronotype (HO), Munich Chronotype Questionnaire (MCTQ) and an adapted "Health and Sleep" questionnaire. Actigraphy was used for 14 days to build actograms and obtain non-parametric variables of the rest-activity rhythm, and grades in the morning courses were used as the measure of academic performance. JLS was used as the measure of sleep pressure, and the statistical significance level was 95%. The population was sociodemographically homogeneous. Most students have a healthy lifestyle, practice physical activity, drive to the university and take between 15 and 30 minutes on this commute. Regarding the sleep-wake cycle, most were classified as intermediate (38.6%) or evening (32%) chronotypes, need to nap during the week, suffer from daytime sleepiness and have poor sleep quality. Since 83% of the sample had at least 1 h of JLS, the sample was divided into two groups: JLS < 2 h (N = 44) and JLS ≥ 2 h (N = 44). The groups differed only in chronotype, showing that evening individuals accumulate more JLS; no differences were found in sociodemographic aspects, rest-activity rhythm or academic performance. The homogeneity of the sample limited the comparison between groups; even so, it is alarming that students already present JLS, poor sleep quality and excessive daytime sleepiness in the first semester, problems that may be accentuated over the university years with the arrival of night shifts and increased academic demand. Interventions addressing the importance of good sleep habits, together with a change in the class start time, are strategies aimed at improving students' health.
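
For readers unfamiliar with the social jet lag measure used here, the sketch below shows how it is typically computed from MCTQ-style data, as the absolute difference between mid-sleep on free days and mid-sleep on workdays; the sleep times are hypothetical examples, not the study's data:

```python
def mid_sleep(onset_h, duration_h):
    """Clock time (in hours; may exceed 24 for post-midnight onsets) of sleep midpoint."""
    return onset_h + duration_h / 2

# Hypothetical student: sleeps 00:30-06:30 on workdays, 02:00-10:00 on free days.
msw = mid_sleep(24.5, 6.0)    # mid-sleep on workdays  -> 27.5 (03:30)
msf = mid_sleep(26.0, 8.0)    # mid-sleep on free days -> 30.0 (06:00)
jls = abs(msf - msw)          # social jet lag in hours
print(f"JLS = {jls:.1f} h")   # 2.5 h -> this student falls in the >=2 h group
```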

Relevance:

100.00%

Publisher:

Abstract:

The objective of this research was to investigate the monthly, seasonal, annual and interdecadal climatology of reference evapotranspiration (ETo) in the state of Acre, in order to better understand its spatial and temporal variability and identify possible trends in the region. The study used data from the municipalities of Rio Branco (the state capital), Tarauacá and Cruzeiro do Sul over a 30-year period (1985-2014), drawn from monthly records of surface weather stations of the National Institute of Meteorology. First, the consistency of the meteorological data was checked, and gaps in the time series were filled by means of multivariate techniques. Subsequently, statistical tests for trend (Mann-Kendall) and homogeneity were performed, the magnitude of the trend was estimated with Sen's estimator, and computational algorithms containing parametric and non-parametric two-sample tests were used to identify the year from which the trend became significant. Finally, analysis of variance (ANOVA) was adopted to verify whether there were significant differences in mean annual evapotranspiration between locations. The indirect Penman-Monteith method, as parameterized by FAO, was used to calculate ETo. Descriptive statistics showed annual average ETo of 3.80, 2.92 and 2.86 mm day-1 for Rio Branco, Tarauacá and Cruzeiro do Sul, respectively, with a marked seasonal pattern featuring a minimum in June and a maximum in October; Rio Branco showed the strongest signal (largest amplitudes), while Cruzeiro do Sul presented the highest variability among the locations studied. ANOVA indicated that the annual means differ statistically at the 1% significance level between locations, except between Cruzeiro do Sul and Tarauacá, where no statistically significant difference was found. For all three locations, the 2000s showed the highest ETo values, associated with warmer waters of the North Atlantic basin, and the 1980s the lowest values, associated with cooler waters of that basin. The Mann-Kendall test and Sen's estimator revealed an increasing trend in seasonal reference evapotranspiration (fall, winter and spring) on the order of 0.11 mm per decade, which became statistically significant from 1990, 1996 and 2001 for Cruzeiro do Sul, Tarauacá and Rio Branco, respectively. The trend analysis of meteorological parameters showed positive trends, at the 5% significance level, for mean temperature, minimum temperature and solar radiation.
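
A minimal sketch of the two trend statistics named above, implemented from their standard definitions (no-ties variance for Mann-Kendall; median pairwise slope for Sen); the annual ETo series below is simulated, not the study's data:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall Z statistic and two-sided p-value (assumes no ties)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return z, 2 * (1 - norm.cdf(abs(z)))

def sen_slope(x):
    """Sen's estimator: median of all pairwise slopes (trend per time step)."""
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)

eto = 3.5 + 0.011 * np.arange(30) + np.random.default_rng(1).normal(0, 0.05, 30)
z, p = mann_kendall(eto)
print(f"Z={z:.2f}, p={p:.4f}, Sen slope={sen_slope(eto):.4f} mm/yr")
```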

Relevance:

100.00%

Publisher:

Abstract:

An important problem faced by the oil industry is the distribution of multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes) and terminals (demand nodes), interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, so the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks; however, the costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary architectures such as MOEA/D, NSGA2 and SPEA2, yielding three architectures named MOTA/D, NSTA and SPETA. An experimental study compares the algorithms on thirty test cases; the results are analyzed with Pareto-compliant quality indicators, and their significance is evaluated with non-parametric statistical tests.
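
As a small illustration of the multi-objective setting, the sketch below filters a set of hypothetical pipeline schedules down to their Pareto front under the three minimization objectives; it is not part of the MOTA/D, NSTA or SPETA architectures themselves:

```python
import numpy as np

def non_dominated(points):
    """Return the Pareto front of a set of points (all objectives minimized)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q <= p) and np.any(q < p)      # q is at least as good everywhere,
            for j, q in enumerate(pts) if j != i  # strictly better somewhere
        )
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical schedules: (delivery time [h], interface losses [m^3], electricity cost [$])
schedules = [(40, 12.0, 900), (38, 15.0, 950), (40, 12.0, 880), (45, 9.0, 870)]
print(non_dominated(schedules))   # the first schedule is dominated by the third
```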

Relevance:

100.00%

Publisher:

Abstract:

Climate variability and change have generated great concern worldwide, with issues such as global warming possibly affecting the availability of water resources in irrigated perimeters. In the semiarid region of Northeastern Brazil the predominance of drought is well known, but little is known about trends in climate series of combined water loss by evaporation and transpiration (evapotranspiration). Therefore, the main objective of this study was to analyze whether there is evidence of increase and/or decrease in the regime of reference evapotranspiration (ETo), on monthly, annual and interdecadal scales, in the irrigated-agriculture towns of Juazeiro, BA (9°24'S, 40°26'W, 375.5 m) and Petrolina, PE (9°09'S, 40°22'W, 376 m). Daily meteorological data for the period from 01/01/1976 to 31/12/2014 were provided by EMBRAPA Semiárido, and daily ETo was estimated using the standard Penman-Monteith method (EToPM) as parameterized by Smith (1991). Other, more simplified estimation methods were calculated and compared to EToPM: Solar Radiation (EToRS), Linacre (EToL), Hargreaves and Samani (EToHS) and the Class A pan method (EToTCA). The main statistical analyses were non-parametric tests of homogeneity (run test), trend (Mann-Kendall), trend magnitude (Sen) and trend onset detection (Mann-Whitney), with statistical significance adopted at 5% and/or 1%. Analysis of variance (ANOVA) was used to detect significant differences between interdecadal means. For comparison between the ETo methods, the correlation coefficient (r), Student's t test and Tukey's test at the 5% significance level were used. Finally, the statistics of Willmott et al. (1985) were used to evaluate the concordance index and the performance of the simplified methods against the standard method. The main results show a decrease in the EToPM time series in the irrigated areas of Juazeiro, BA and Petrolina, PE, significant at 1% and 5% respectively, with an annual magnitude of -14.5 mm (Juazeiro) and -7.7 mm (Petrolina) and trend onset in 1996. The methods with the best agreement with EToPM were EToRS, with very good performance in both locations, followed by EToL, with good performance in Juazeiro and median performance in Petrolina; EToHS had the worst (bad) performance in both locations. It is suggested that this decrease in EToPM may be associated with the increase in irrigated agricultural areas and the construction of the Sobradinho lake upstream of the perimeters.
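
The standard EToPM equation referred to above is the FAO-56 Penman-Monteith formula; a minimal sketch with hypothetical daily inputs follows (variable names and example values are illustrative, not the study's data):

```python
import math

def eto_penman_monteith(t_mean, rn, u2, ea, g=0.0, pressure=101.3):
    """Daily ETo (mm/day). t_mean in degC, rn/g in MJ m-2 day-1, u2 in m/s, ea/pressure in kPa."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))   # saturation vapour pressure
    delta = 4098 * es / (t_mean + 237.3) ** 2                    # slope of the es curve
    gamma = 0.000665 * pressure                                  # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * 900 / (t_mean + 273) * u2 * (es - ea)
    return num / (delta + gamma * (1 + 0.34 * u2))

# Hypothetical semiarid day: 27 degC, Rn = 14 MJ m-2 day-1, u2 = 2 m/s, ea = 2.1 kPa.
print(f"ETo = {eto_penman_monteith(27.0, 14.0, 2.0, 2.1):.2f} mm/day")
```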

Relevance:

100.00%

Publisher:

Abstract:

This thesis stems from a project with the real-time environmental monitoring company EMSAT Corporation, who were looking for methods to automatically flag spikes and other anomalies in their environmental sensor data streams. The problem presents several challenges: near real-time anomaly detection, absence of labeled data, and time-changing data streams. Here, we address this problem using both a statistical parametric approach and a non-parametric approach, Kernel Density Estimation (KDE). The main contribution of this thesis is extending KDE to work more effectively for evolving data streams, particularly in the presence of concept drift. To that end, we have developed a framework for integrating the Adaptive Windowing (ADWIN) change detection algorithm with KDE. We have tested this approach on several real-world data sets and received positive feedback from our industry collaborator. Some results appearing in this thesis were presented at the ECML PKDD 2015 Doctoral Consortium.
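
A minimal sketch of the idea, assuming a fixed-size sliding window in place of the thesis's ADWIN-adapted window: each new reading is scored against a KDE of recent history and flagged when its density is low. The data, window size and threshold are illustrative, and refitting the KDE at every step is deliberately naive:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_anomaly_scores(stream, window=200, threshold=1e-3):
    flags = []
    for t in range(window, len(stream)):
        kde = gaussian_kde(stream[t - window:t])   # density of recent history only
        density = kde(stream[t])[0]
        flags.append(density < threshold)          # low density -> flag as anomaly
    return np.array(flags)

rng = np.random.default_rng(42)
stream = rng.normal(20.0, 0.5, 500)                # synthetic sensor readings
stream[400] = 35.0                                 # injected spike
flags = kde_anomaly_scores(stream, window=200)
print(np.nonzero(flags)[0] + 200)                  # flagged indices -> [400]
```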

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes an allocation Malmquist index inspired by work on the non-parametric cost Malmquist index. We first show how to decompose the cost Malmquist index into the input-oriented Malmquist index and the allocation Malmquist index. An application to corporate management in the China securities industry, using a panel data set of 40 securities companies over the period 2005-2011, shows the practicality of the proposed model.
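
Schematically, the decomposition described above can be written as follows; the precise cost- and input-distance-function definitions are those of the paper and are not reproduced here:

```latex
% Structural relation only: the cost Malmquist index factors into the
% input-oriented Malmquist index and the allocation Malmquist index.
\[
  \underbrace{CM(x^{t},y^{t},x^{t+1},y^{t+1})}_{\text{cost Malmquist}}
  \;=\;
  \underbrace{M_{I}(x^{t},y^{t},x^{t+1},y^{t+1})}_{\text{input-oriented Malmquist}}
  \times
  \underbrace{AM(x^{t},y^{t},x^{t+1},y^{t+1})}_{\text{allocation Malmquist}}
\]
```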

Relevance:

100.00%

Publisher:

Abstract:

Objective: The prevalence of smoking in Aboriginal Canadians is higher than in non-Aboriginal Canadians, a behavior that also tends to alter dietary patterns; compared with the general Canadian population, maternal smoking rates are almost twice as high. The aim of this study was to compare the dietary adequacy of Inuvialuit women of childbearing age, smokers versus non-smokers. Research methods & procedures: A cross-sectional study in which participants completed a culturally specific quantitative food frequency questionnaire. Non-parametric analysis was used to compare mean nutrient intake, dietary inadequacy and differences in nutrient density between smokers and non-smokers, and multiple logistic regression analyses were performed for key nutrient inadequacies and smoking status. Data were collected from three communities in the Beaufort Delta region of the Northwest Territories, Canada, from randomly selected Inuvialuit women of childbearing age (19-44 years). Results: Of 92 participants, 75% reported being smokers. There were no significant differences in age, BMI, marital status, education, number of people in the household working and/or self-employed, or physical activity between smokers and non-smokers. Non-parametric analysis showed no differences in nutrient intake between smokers and non-smokers. Logistic regression, however, revealed a positive association between smoking and inadequacies of vitamin C (OR = 2.91, 95% CI 1.17-5.25), iron (OR = 3.16, 95% CI 1.27-5.90) and zinc (OR = 2.78, 95% CI 1.12-4.94). A high percentage of women (>60%), regardless of smoking status, did not meet the dietary recommendations for fiber, vitamins D and E, and potassium. Conclusions: This study provides evidence of inadequate dietary intake among Inuvialuit women of childbearing age regardless of smoking behavior.
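
A minimal sketch of the logistic-regression step, with simulated data standing in for the survey (the 75% smoking rate and n = 92 come from the abstract; the inadequacy risk levels are assumptions):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
smoker = rng.binomial(1, 0.75, 92)                    # 75% smokers, n = 92
p_inadequate = np.where(smoker == 1, 0.55, 0.30)      # assumed inadequacy risks
inadequate = rng.binomial(1, p_inadequate)            # 0/1 nutrient inadequacy

X = sm.add_constant(smoker.astype(float))
fit = sm.Logit(inadequate, X).fit(disp=False)
or_smoking = np.exp(fit.params[1])                    # odds ratio for smoking
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR = {or_smoking:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```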

Relevance:

100.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
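
As a toy illustration of the latent-structure factorization discussed above, the sketch below evaluates the joint pmf of two categorical variables under a two-class PARAFAC-type model; all parameter values are made up:

```python
# Latent class factorization of a joint pmf for multivariate categorical
# data: P(y) = sum_h nu_h * prod_j lambda[j, h, y_j]. Illustrative values only.
import numpy as np

nu = np.array([0.6, 0.4])                 # latent class weights, H = 2
lam = np.array([                          # lam[j, h, c] = P(y_j = c | class h)
    [[0.7, 0.3], [0.2, 0.8]],             # variable 1, two categories
    [[0.5, 0.5], [0.9, 0.1]],             # variable 2, two categories
])

def joint_pmf(y):
    """P(y_1, y_2) under the latent class model."""
    return sum(nu[h] * np.prod([lam[j, h, y[j]] for j in range(len(y))])
               for h in range(len(nu)))

total = sum(joint_pmf((a, b)) for a in range(2) for b in range(2))
print(joint_pmf((0, 1)), f"(pmf sums to {total:.3f})")   # sums to 1 by construction
```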

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification; moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
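
The sketch below is not the chapter's KL-optimal construction but a simpler cousin, a Laplace approximation, that conveys the idea of replacing a posterior with a Gaussian; the one-parameter Poisson log-linear model and its data are made up:

```python
# Gaussian (Laplace) approximation to a posterior: center at the mode,
# variance from the curvature there. Toy model: y_i ~ Poisson(exp(theta)),
# theta ~ N(0, 10). Not the KL-optimal approximation of Chapter 4.
import numpy as np
from scipy.optimize import minimize_scalar

y = np.array([3, 5, 4, 6, 2, 5])

def neg_log_post(theta):
    loglik = np.sum(y * theta - np.exp(theta))   # Poisson log-likelihood (up to a constant)
    logprior = -theta**2 / (2 * 10.0)
    return -(loglik + logprior)

mode = minimize_scalar(neg_log_post).x
# Curvature of the negative log posterior: n * exp(theta) + 1/10.
var = 1.0 / (len(y) * np.exp(mode) + 1 / 10.0)
print(f"approximate posterior: N({mode:.3f}, {var:.4f})")
```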

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
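
A minimal sketch of the waiting-times idea: record the gaps between exceedances of a high threshold in a time series. The heavy-tailed series and the 99th-percentile threshold below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_t(df=3, size=10_000)          # heavy-tailed synthetic series
threshold = np.quantile(x, 0.99)               # high threshold

exceed_times = np.nonzero(x > threshold)[0]    # indices of extreme events
waiting_times = np.diff(exceed_times)          # gaps between successive exceedances
print(f"{len(exceed_times)} exceedances, "
      f"median waiting time = {np.median(waiting_times):.0f} steps")
```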

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, yet comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
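
As a toy instance of an approximate transition kernel built from random subsets of data (not the chapter's framework), the sketch below runs random-walk Metropolis with a subsampled log-likelihood; the acceptance decisions are noisy, which is precisely the kind of kernel error the chapter quantifies:

```python
# Toy approximate MCMC: y_i ~ N(theta, 1), log-likelihood estimated from
# a random subset of the data at each step. Model and data are made up.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, 100_000)

def subset_loglik(theta, m=1_000):
    idx = rng.integers(0, len(y), m)             # random subset of the data
    return len(y) / m * np.sum(-0.5 * (y[idx] - theta) ** 2)

theta, chain = 0.0, []
ll = subset_loglik(theta)
for _ in range(2_000):
    prop = theta + rng.normal(0, 0.02)
    ll_prop = subset_loglik(prop)
    if np.log(rng.random()) < ll_prop - ll:      # approximate MH accept step
        theta, ll = prop, ll_prop
    chain.append(theta)
print(f"posterior mean estimate ~ {np.mean(chain[500:]):.3f}")   # near 2.0
```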

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
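
A minimal sketch of the truncated-Normal (Albert and Chib) data augmentation Gibbs sampler for probit regression mentioned above, run on simulated rare-event data of the kind the chapter shows to mix slowly; the flat prior and dimensions are illustrative:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)
n = 2_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])                 # rare events: few successes
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

beta = np.zeros(2)
XtX_inv = np.linalg.inv(X.T @ X)                  # flat prior on beta
draws = []
for _ in range(1_000):
    mu = X @ beta
    # z_i | y_i, beta ~ N(mu_i, 1), truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # beta | z ~ N((X'X)^-1 X'z, (X'X)^-1)
    m = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(m, XtX_inv)
    draws.append(beta[0])
print(f"intercept posterior mean ~ {np.mean(draws[200:]):.2f}")
```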

Relevance:

100.00%

Publisher:

Abstract:

This study explores the potential impact of a forest protection intervention on rural households' private fuel tree planting in the Chiro district of eastern Ethiopia. The results revealed a robust and significant positive impact of the intervention on farmers' decisions to produce private household energy by growing fuel trees on their farms. As participation in private fuel tree planting is not random, the study addresses the methodological issue of investigating the causal effect of the forest protection intervention on households' private fuel tree planting through the non-parametric propensity score matching (PSM) method. On average, the protection intervention has increased fuel tree planting by 503 trees (580.6%) compared to open access areas, and has indirectly contributed to slowing down the loss of biodiversity in the area. Land cover/use is a dynamic phenomenon that changes over time and space due to anthropogenic pressure and development. Forest cover and land use changes in the Chiro district over a period of 40 years were studied using remotely sensed data: multi-temporal Landsat imagery was used to map and monitor the changes that occurred at three points in time, 1972, 1986 and 2012, with a pixel-based supervised image classification used to map land use/land cover classes for each date. The change detection analysis revealed remarkable land cover/land use changes in general and forest cover change in particular: dense forest cover declined from 235 ha in 1972 to 51 ha in 1986. However, government intervention in forest protection in 1989 slowed down this drastic loss of dense forest cover around the protected area, reclaiming 1,300 hectares of deforested land through a reforestation program up to 2012.
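
A minimal sketch of the PSM step, assuming hypothetical covariates and a simulated treatment effect (the study's survey variables are not reproduced): logistic propensity scores, nearest-neighbour matching, and an estimate of the average treatment effect on the treated (ATT):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 400
covariates = rng.normal(size=(n, 3))              # e.g. farm size, age, distance (assumed)
treated = rng.binomial(1, 1 / (1 + np.exp(-covariates[:, 0])))
trees = 100 + 50 * covariates[:, 0] + 500 * treated + rng.normal(0, 30, n)

ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]
t_idx, c_idx = np.nonzero(treated)[0], np.nonzero(1 - treated)[0]
# Nearest-neighbour match on the propensity score, with replacement.
matches = c_idx[np.abs(ps[t_idx, None] - ps[None, c_idx]).argmin(axis=1)]
att = np.mean(trees[t_idx] - trees[matches])
print(f"ATT ~ {att:.0f} fuel trees")              # simulated true effect is 500
```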

Relevance:

100.00%

Publisher:

Abstract:

We present an extensive photometric catalog for 548 CALIFA galaxies observed as of the summer of 2015. CALIFA currently lacks photometry matching the scale and diversity of its spectroscopy; this work is intended to meet all photometric needs for CALIFA galaxies while also identifying best photometric practices for upcoming integral field spectroscopy surveys such as SAMI and MaNGA. This catalog comprises gri surface brightness profiles derived from Sloan Digital Sky Survey (SDSS) imaging, a variety of non-parametric quantities extracted from these profiles, and parametric models fitted to the i-band profiles (1D) and original galaxy images (2D). To complement our photometric analysis, we contrast the relative performance of our 1D and 2D modelling approaches. The ability of each measurement to characterize the global properties of galaxies is quantitatively assessed in the context of constructing the tightest scaling relations. Where possible, we compare our photometry with existing photometrically or spectroscopically obtained measurements from the literature. Close agreement is found with Walcher et al. (2014), the current source of basic photometry and classifications of CALIFA galaxies, while comparison with spectroscopically derived quantities reveals the effect of CALIFA's limited field of view relative to broadband imaging surveys such as the SDSS. The colour-magnitude diagram, star formation main sequence, and Tully-Fisher relation of CALIFA galaxies are studied to give a small example of the investigations possible with this rich catalog. We conclude with a discussion of points of concern for ongoing integral field spectroscopy surveys and directions for future expansion and exploitation of this work.
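
As a small illustration of the 1D parametric modelling step, the sketch below fits a Sersic profile to a synthetic i-band surface brightness profile using the common b_n ~ 2n - 1/3 approximation; it is not the catalog's fitting pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def sersic(r, i_e, r_e, n):
    """Sersic surface brightness profile; b_n ~ 2n - 1/3 (valid for n above ~0.5)."""
    b_n = 2 * n - 1 / 3
    return i_e * np.exp(-b_n * ((r / r_e) ** (1 / n) - 1))

rng = np.random.default_rng(11)
r = np.linspace(0.5, 30, 60)                        # radius [arcsec]
flux = sersic(r, 100.0, 5.0, 2.5) * (1 + rng.normal(0, 0.02, r.size))  # synthetic profile

params, _ = curve_fit(sersic, r, flux, p0=(50.0, 3.0, 1.0))
print("fitted (I_e, R_e, n):", np.round(params, 2))  # recovers ~(100, 5, 2.5)
```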

Relevance:

100.00%

Publisher:

Abstract:

Identifying 20th-century periodic coastal surge variation is strategic for 21st-century coastal surge estimates, as surge periodicities may amplify or reduce future MSL-enhanced surge forecasts. Extreme coastal surge data from Belfast Harbour (UK) tide gauges are available for 1901-2010 and provide the potential for decadal-plus periodic coastal surge analysis. Annual extreme surge-elevation distributions (sampled every 10 min) are analysed using PCA and cluster analysis to decompose variation within and between years, to assess the similarity of years in terms of Surge Climate Types, and to establish the significance of any transitions in Type occurrence over time using non-parametric Markov analysis. Annual extreme surge variation is shown to be periodically organised across the 20th century. Extreme surge magnitude and distribution show a number of significant cyclone-induced multi-annual (2, 3, 5 & 6 year) cycles, as well as dominant multi-decadal (15-25 year) cycles of variation superimposed on an 80-year fluctuation in atmospheric-oceanic variation across the North Atlantic (relative to NAO/AMO interaction). The top 30 extreme surge events show some relationship with the NAO per se, given that 80% are associated with westerly dominant atmospheric flows (+NAO), while the remaining 20% are associated with blocking air masses (-NAO). Although 20% of the top 30 ranked positive surges occurred within the last twenty years, there is no unequivocal evidence of recent acceleration in extreme surge magnitude beyond the scale of natural periodic variation.
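
A minimal sketch of the Markov step: estimating a transition matrix between annual Surge Climate Types from a sequence of cluster labels (the label sequence below is hypothetical, and the significance testing of transitions is omitted):

```python
import numpy as np

labels = np.array([0, 0, 1, 2, 1, 1, 0, 2, 2, 1, 0, 0, 1, 2, 1])  # one Type per year
k = labels.max() + 1
counts = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):         # count year-to-year transitions
    counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)   # row-normalized probabilities
print(np.round(transition, 2))
```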

Relevance:

100.00%

Publisher:

Abstract:

Seclusion with or without restraint in psychiatric settings affects nearly one in four patients in Quebec (Dumais, Larue, Drapeau, Ménard, & Giguère-Allard, 2011). Yet it is widely documented that this practice is harmful to patients, nurses and the organization (Stewart, Van der Merwe, Bowers, Simpson, & Jones, 2010). This ethically problematic measure is the subject of policies aiming to restrict or even eliminate it. Studies of patients' experience of seclusion, as well as of nurses' perceptions, identify the need for a debriefing after the event. Several research teams propose a post-seclusion review (REPI) involving both the treating team, particularly nurses, and the patient, as an intervention to reduce the incidence of seclusion and restraint. The REPI aims at emotional exchange, analysis of the steps that led to the seclusion decision, and planning of future interventions. The goal of this study was to develop, implement and evaluate the REPI with the staff and patients of an acute psychiatric care unit in order to improve their care experience. The research questions were: 1) What is the implementation context of the REPI? 2) What are the facilitating factors and obstacles to implementing the REPI according to patients and staff? 3) How do patients and staff perceive the modalities and outcomes of the REPI? and 4) Is the implementation of the REPI associated with a decrease in the prevalence and duration of seclusion episodes? This instrumental case study (Stake, 1995, 2008) was anchored in a participatory approach. The case was the acute psychiatric unit for first-episode psychosis where the REPI was implemented. First, the development of the REPI was informed by documenting the context through immersion in the setting (n=56 hours) and individual interviews with a convenience sample (n=3 patients, n=14 staff members). An expert committee (the student researcher, six unit nurses and a patient partner) then developed the REPI, which has two components: with the patient and within the team. Outcomes were evaluated through individual interviews (n=3 patients, n=12 staff members) and by examining the prevalence and duration of seclusion episodes six months before and after implementation of the REPI. Qualitative data were examined using thematic analysis (Miles, Huberman, & Saldana, 2014), while quantitative data were analyzed with descriptive and non-parametric tests. The results suggest that the implementation context is defined by implicit and explicit norms in which the use of seclusion can generate a vicious circle of aggressive behavior fed by patients' deep sense of injustice; patients feel they must conform to staff expectations and unit rules. Participants expressed the need to create opportunities for authentic communication, which could take place during the REPI, although its practice varied from one staff member to another. The results suggest that the main factor facilitating the implementation of the REPI was the study's participatory approach, while the obstacles encountered mostly stem from the complexity of carrying out the team component of the REPI.
During the REPI with the patient, nurses were able to explore the patient's feelings and point of view, which helped rebuild the therapeutic relationship. The REPI with the care team was perceived as a learning opportunity and made it possible to adjust patients' intervention plans. Following the implementation of the REPI, the results showed a significant reduction in the use of seclusion and in time spent in seclusion. The findings of this thesis underline the possibility of overcoming the initial discomfort perceived by both patient and nurse by making the REPI systematic. Moreover, this study emphasizes the need for an authentic presence to achieve meaningful sharing in the therapeutic relationship, which is the cornerstone of mental health nursing practice. This study contributes to knowledge on the prevention of aggressive behavior in psychiatric settings by documenting the context in which seclusion and restraint occur, by proposing a two-component REPI and by exploring its outcomes. Our results support the potential of developing tertiary prevention that integrates both patients' and staff members' perspectives.

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES The shear bond strength of three glass ionomer cements (GIC) to enamel and dentine was evaluated. STUDY DESIGN Sound permanent human molars (n=12) were ground perpendicular to their long axes, exposing smooth, flat enamel and dentine surfaces. The teeth were embedded in resin and conditioned with polyacrylic acid (25%; 10 s). Twenty-four specimens of each GIC, Fuji IX (FJ-GC), Ketac Molar Easymix (KM-3M ESPE) and Maxxion (MX-FGM), were prepared according to the Atraumatic Restorative Treatment (ART) approach (12 on enamel and 12 on dentine), with a bonding area of 4.91 mm², and immersed in water (37°C, 24 h). Shear bond strength was tested in a universal testing machine, and non-parametric statistical tests (Friedman with post-hoc Wilcoxon signed ranks) were carried out (α=0.05). RESULTS The mean (±sd) shear bond strengths (MPa) on enamel and dentine were: KM (6.4±1.4 and 7.6±1.5), FJ (5.9±1.5 and 6.0±1.9) and MX (4.2±1.5 and 4.9±1.5), respectively. There was a statistically significant difference between the GICs on both substrates: enamel (p=0.004) and dentine (p=0.002). The lowest shear bond value on enamel was with MX and the highest on dentine was with KM (p<0.05). CONCLUSION KM showed the best adhesion to both enamel and dentine, followed by FJ and MX.
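
A minimal sketch of the statistical comparison, with simulated shear-bond values whose means and spreads loosely echo the dentine results reported above (they are not the study's measurements):

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(9)
km = rng.normal(7.6, 1.5, 12)       # Ketac Molar, dentine (MPa), simulated
fj = rng.normal(6.0, 1.9, 12)       # Fuji IX, simulated
mx = rng.normal(4.9, 1.5, 12)       # Maxxion, simulated

stat, p = friedmanchisquare(km, fj, mx)          # omnibus test across the three GICs
print(f"Friedman: chi2={stat:.2f}, p={p:.4f}")
for name, other in (("KM vs FJ", fj), ("KM vs MX", mx)):
    print(name, wilcoxon(km, other))             # post-hoc pairwise comparisons
```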

Relevance:

100.00%

Publisher:

Abstract:

In an environment of constant change, technological development, market competition and more informed consumers, the search for a lasting relationship through the conquest of loyalty has become a central objective of companies. However, several authors suggest that this loyalty can be affected by negative comments available on the internet. This dissertation therefore examines whether complaints available on the internet impact loyalty to a mobile phone brand. The research used as its basis the Expanded NCSB model suggested by Johnson et al. (2001), studying five prominent drivers of loyalty: image/brand reputation, affective commitment, calculative commitment, perceived value and trust, with the satisfaction construct as a moderator variable. The research method adopted was an experimental design that included 285 undergraduate students, with a field study of the mobile industry, specifically cell phone brands. The approach was quantitative, using descriptive statistics, factor analysis, cluster analysis, linear regression and the non-parametric Wilcoxon test for data analysis. Of the 16 hypotheses derived from the proposed research model, 12 were confirmed. The results showed that a complaint available on the internet, here represented by complaints posted on the website Reclame Aqui, may impact consumer perceptions of brand loyalty as well as its antecedents, and that these complaints can affect all consumers regardless of their historical satisfaction with the brand. The study also noted a positive relationship between the independent variables (trust, image/brand reputation, perceived value, affective commitment and calculative commitment) and the dependent variable, loyalty, even when considering the data obtained after exposure to the complaint. However, there was no unanimous conclusion that the relationship between these variables was strongest in the group with a satisfactory experience. Before exposure, trust was the most important variable for the formation of loyalty; after exposure to the treatment, image/brand reputation was more relevant. Contributions of the study, limitations and recommendations for future research are also discussed.