894 results for mean field analysis
Abstract:
The concept of plagiarism is commonly associated with the concept of intellectual property, for both historical and legal reasons: the approach to the ownership of ‘moral’, nonmaterial goods has evolved into the right to individual property, and consequently a need arose to establish a legal framework to cope with the infringement of those rights. The solution to plagiarism therefore falls most often under two categories: ethical and legal. On the ethical side, education and intercultural studies have addressed plagiarism critically, not only as a means to improve academic ethics policies (PlagiarismAdvice.org, 2008), but mainly to demonstrate that, if anything, the concept of plagiarism is far from universal (Howard & Robillard, 2008). Although in different ways, Howard (1995) and Scollon (1994, 1995) argued, and Angèlil-Carter (2000) and Pecorari (2008) later emphasised, that plagiarism cannot be studied on the assumption that a single definition is clearly understood by everyone. Scollon (1994, 1995), for example, claimed that authorship attribution is a particular problem in non-native writing in English, as did Pecorari (2008) in her comprehensive analysis of academic plagiarism. If, among higher education students, plagiarism is often a problem of literacy, with prior, conflicting social discourses that may interfere with academic discourse, as Angèlil-Carter (2000) demonstrates, then a distinction must be made between intentional and inadvertent plagiarism: plagiarism should be prosecuted when intentional, but if it is part of the learning process and results from the plagiarist’s unfamiliarity with the text or topic it should be considered ‘positive plagiarism’ (Howard, 1995: 796) and hence not an offense. The intention behind an instance of plagiarism therefore determines the nature of the disciplinary action adopted. Unfortunately, in order to demonstrate the intention to deceive and charge students with plagiarism, teachers necessarily have to position themselves as ‘plagiarism police’, although it has been argued otherwise (Robillard, 2008). Practice demonstrates that, in their daily activities, teachers are required to command investigative skills and tools that they most often lack. We thus claim that the ‘intention to deceive’ cannot always be dissociated from plagiarism as a legal issue, even if Garner (2009) asserts that plagiarism is generally immoral but not illegal, and Goldstein (2003) draws the same distinction. However, these claims, and the claim that only cases of copyright infringement tend to go to court, have recently been challenged, mainly by forensic linguists, who have been actively involved in cases of plagiarism. Turell (2008), for instance, demonstrated that plagiarism is often associated with an illegal appropriation of ideas. Earlier, Turell (2004) had shown, by comparing four translations of Shakespeare’s Julius Caesar into Spanish, that linguistic evidence can demonstrate instances of plagiarism. This challenge is also reinforced by the practice of international organisations, such as the IEEE, for whom plagiarism potentially has ‘severe ethical and legal consequences’ (IEEE, 2006: 57). What the plagiarism definitions used by publishers and organisations have in common – and what academia usually lacks – is their focus on the legal dimension.
We speculate that this is due to the relation they intentionally establish with copyright laws, whereas in education the focus tends to shift from the legal to the ethical aspects. However, the number of plagiarism cases taken to court is very small, and jurisprudence on the topic is still being developed. In countries within the Civil Law tradition, Turell (2008) claims, (forensic) linguists are seldom called upon as expert witnesses in cases of plagiarism, either because plagiarists are rarely taken to court or because there is little tradition of accepting linguistic evidence. In spite of the investigative and evidential potential of forensic linguistics to demonstrate the plagiarist’s intention or otherwise, this potential is restricted by the ability to identify a text as suspect of plagiarism in the first place. In an era of such massive textual production, ‘policing’ plagiarism thus becomes an extraordinarily difficult task without the assistance of plagiarism detection systems. Although plagiarism detection has attracted the attention of computer engineers and software developers for years, much research is still needed. Given the investigative nature of academic plagiarism, plagiarism detection necessarily has to consider not only concepts from education and computational linguistics, but also forensic linguistics, especially if it is to counter claims of being a ‘simplistic response’ (Robillard & Howard, 2008). In this paper, we use a corpus of essays written by university students who were accused of plagiarism to demonstrate that a forensic linguistic analysis of improper paraphrasing in suspect texts has the potential to identify and provide evidence of intention. A linguistic analysis of the corpus texts shows that the plagiarist acts on the paradigmatic axis to replace relevant lexical items with a related word from the same semantic field, i.e. a synonym, a subordinate, a superordinate, etc. In other words, relevant lexical items were replaced with related, but not identical, ones. Additionally, the analysis demonstrates that the word order is often changed intentionally to disguise the borrowing. On the other hand, the linguistic analysis of linking and explanatory verbs (i.e. referencing verbs) and prepositions shows that these have the potential to discriminate between instances of ‘patchwriting’ and instances of plagiarism. This research demonstrates that when the plagiarism is inadvertent the referencing verbs are borrowed from the original in an attempt to construct the new text cohesively, whereas when it is intentional the plagiarist has made an effort to prevent the reader from identifying the text as plagiarism. In some of these cases, the referencing elements prove able to identify direct quotations and thus ‘betray’ and denounce the plagiarism. Finally, we demonstrate that a forensic linguistic analysis of these verbs is critical to allow detection software to identify them as proper paraphrasing and not – mistakenly and simplistically – as plagiarism.
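The substitution and word-order findings above can be pictured computationally. The following Python sketch is a toy illustration only, not the authors' forensic method: it flags sentence pairs whose shared vocabulary is high but whose word order diverges, the pattern the abstract associates with deliberate disguise of borrowing. The helper functions and thresholds are hypothetical.

```python
# Minimal sketch (not the authors' method): flag sentence pairs whose word
# overlap is high but whose word order differs, a pattern the paper associates
# with deliberate disguise of borrowing. Thresholds are illustrative only.

def tokens(text):
    return [w.strip(".,;:()'\"").lower() for w in text.split() if w.strip(".,;:()'\"")]

def bigrams(seq):
    return set(zip(seq, seq[1:]))

def overlap(a, b):
    return len(a & b) / max(len(a | b), 1)

def suspicion(source, suspect):
    s, t = tokens(source), tokens(suspect)
    lexical = overlap(set(s), set(t))        # shared vocabulary
    order = overlap(bigrams(s), bigrams(t))  # shared adjacent word pairs
    # High vocabulary overlap with low order overlap suggests reordering/substitution.
    return lexical, order, lexical > 0.6 and order < 0.5

src = "the referencing verbs are borrowed from the original text"
sus = "from the original text the verbs referencing are borrowed"
print(suspicion(src, sus))
```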
Abstract:
Objective: The purpose of this study was to examine the effectiveness of a new analysis method of mfVEP objective perimetry in the early detection of glaucomatous visual field defects compared to the gold standard technique. Methods and patients: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard Humphrey Field Analyzer 24-2 visual field tests and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal to noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P<0.001; 95% confidence intervals 2.82–2.89 for the normal group, 2.25–2.29 for the glaucoma suspect group, and 1.67–1.73 for the glaucoma group). The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma group (t-test, P<0.001), in 5/11 pairs in the glaucoma suspect group (t-test, P<0.01), and in only 1/11 pairs in the normal group (t-test, P<0.9). The sensitivity and specificity of the hemifield sector analysis protocol were 97% and 86%, respectively, in detecting glaucoma, and 89% and 79% in glaucoma suspects. These results showed that the new analysis protocol was able to confirm existing visual field defects detected by standard perimetry, was able to differentiate between the three study groups with a clear distinction between normal patients and those with suspected glaucoma, and was able to detect early visual field changes not detected by standard perimetry. In addition, the distinction between normal and glaucoma patients was especially clear and significant using this analysis. Conclusion: The new hemifield sector analysis protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. This protocol provides information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous visual field loss. The intersector analysis protocol can detect early field changes not detected by the standard Humphrey Field Analyzer test. © 2013 Mousa et al, publisher and licensee Dove Medical Press Ltd.
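The statistics reported above reduce to a between-group comparison of superior-inferior SNR differences plus within-group sector-pair tests. The Python sketch below illustrates that workflow with simulated placeholder values (not study data); the group means and spreads are assumptions chosen only to mimic the reported confidence intervals.

```python
# Illustrative sketch of the statistics reported above (not the authors' code):
# per-eye superior-inferior SNR differences are compared across groups with a
# one-way ANOVA, and within a group sector pairs are compared with t-tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Superior-minus-inferior hemifield SNR differences, one value per eye (placeholders).
normal   = rng.normal(2.85, 0.10, 38)
suspect  = rng.normal(2.27, 0.10, 38)
glaucoma = rng.normal(1.70, 0.10, 36)

f, p = stats.f_oneway(normal, suspect, glaucoma)
print(f"ANOVA across groups: F={f:.1f}, p={p:.3g}")

# Within one group: paired t-test for a single sector pair (superior vs inferior).
sup = rng.normal(2.0, 0.3, 36)
inf = rng.normal(1.4, 0.3, 36)
t, p_pair = stats.ttest_rel(sup, inf)
print(f"Sector pair t-test: t={t:.2f}, p={p_pair:.3g}")
```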
Abstract:
Purpose: To determine whether curve-fitting analysis of the ranked segment distributions of topographic optic nerve head (ONH) parameters, derived using the Heidelberg Retina Tomograph (HRT), provides a more effective statistical descriptor to differentiate the normal from the glaucomatous ONH. Methods: The sample comprised 22 normal control subjects (mean age 66.9 years; S.D. 7.8) and 22 glaucoma patients (mean age 72.1 years; S.D. 6.9) confirmed by reproducible visual field defects on the Humphrey Field Analyser. Three 10°-images of the ONH were obtained using the HRT. The mean topography image was determined and the HRT software was used to calculate the rim volume, rim area to disc area ratio, normalised rim area to disc area ratio and retinal nerve fibre cross-sectional area for each patient at 10°-sectoral intervals. The values were ranked in descending order, and each ranked-segment curve of ordered values was fitted using the least squares method. Results: There was no difference in disc area between the groups. The group mean cup-disc area ratio was significantly lower in the normal group (0.204 ± 0.16) than in the glaucoma group (0.533 ± 0.083) (p < 0.001). The visual field indices, mean deviation and corrected pattern S.D., were significantly greater (p < 0.001) in the glaucoma group (-9.09 ± 3.3 dB and 7.91 ± 3.4 dB, respectively) than in the normal group (-0.15 ± 0.9 dB and 0.95 ± 0.8 dB, respectively). Univariate linear regression provided the best overall fit to the ranked segment data. The equation parameters of the regression line manually applied to the normalised rim area-disc area and the rim area-disc area ratio data correctly classified 100% of normal subjects and glaucoma patients. In this study sample, the regression analysis of ranked segment parameters method was more effective than conventional ranked segment analysis, in which glaucoma patients were misclassified in approximately 50% of cases. Further investigation in larger samples will enable the calculation of confidence intervals for normality. These reference standards will then need to be investigated for an independent sample to fully validate the technique. Conclusions: Using a curve-fitting approach to fit ranked segment curves retains information relating to the topographic nature of neural loss. Such methodology appears to overcome some of the deficiencies of conventional ranked segment analysis and, subject to validation in larger scale studies, may potentially be of clinical utility for detecting and monitoring glaucomatous damage. © 2007 The College of Optometrists.
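As a rough illustration of the ranked-segment curve-fitting idea (an assumed workflow, not the authors' implementation), the sketch below ranks sectoral values in descending order and fits a least-squares line whose slope and intercept summarise the profile; the sector values are synthetic.

```python
# Sketch of the ranked-segment curve-fitting idea (assumed workflow, not the
# authors' implementation): sectoral HRT values are ranked in descending order
# and a least-squares line is fitted; its slope/intercept summarise the profile.
import numpy as np

def ranked_segment_fit(sector_values):
    """Fit a line to the descending-ranked sectoral values."""
    y = np.sort(np.asarray(sector_values, dtype=float))[::-1]
    x = np.arange(1, len(y) + 1)
    slope, intercept = np.polyfit(x, y, 1)   # univariate linear least squares
    return slope, intercept

# 36 sectors at 10-degree intervals; illustrative rim-area/disc-area values.
rng = np.random.default_rng(1)
normal_eye   = rng.normal(0.08, 0.01, 36)
glaucoma_eye = rng.normal(0.05, 0.02, 36)   # lower, more variable rim

print("normal  :", ranked_segment_fit(normal_eye))
print("glaucoma:", ranked_segment_fit(glaucoma_eye))
```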
Abstract:
A vision system is applied to full-field displacement and deformation measurements in solid mechanics. A speckle-like pattern is first formed on the surface under investigation. To determine the displacement field of one speckle image with respect to a reference speckle image, sub-images, referred to as Zones Of Interest (ZOI), are considered. The field is obtained by matching a ZOI in the reference image with the corresponding ZOI in the moved image. Two image processing techniques are used to implement the matching procedure: the cross-correlation function and the minimum mean square error (MMSE) of the ZOI intensity distribution. The two algorithms are compared and the influence of the ZOI size on the accuracy of the measurements is studied.
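A minimal sketch of the two matching criteria described above follows, assuming grayscale images and whole-pixel displacements; sub-pixel interpolation, which a practical system would need, is omitted.

```python
# Sketch of the two matching criteria described above, assuming small integer
# pixel displacements and grayscale images (sub-pixel refinement omitted).
import numpy as np

def match_zoi(ref, moved, top, left, size, search):
    """Return the (dy, dx) maximising cross-correlation and minimising MMSE."""
    zoi = ref[top:top + size, left:left + size].astype(float)
    best_cc, best_mmse = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moved[top + dy:top + dy + size, left + dx:left + dx + size].astype(float)
            cc = np.sum(zoi * cand)                  # cross-correlation score
            mmse = np.mean((zoi - cand) ** 2)        # mean square error
            if best_cc is None or cc > best_cc[0]:
                best_cc = (cc, (dy, dx))
            if best_mmse is None or mmse < best_mmse[0]:
                best_mmse = (mmse, (dy, dx))
    return best_cc[1], best_mmse[1]

# Synthetic speckle image shifted by a known displacement (3, 2).
rng = np.random.default_rng(2)
img = rng.random((128, 128))
shifted = np.roll(np.roll(img, 3, axis=0), 2, axis=1)
print(match_zoi(img, shifted, 40, 40, 32, 5))   # expect ((3, 2), (3, 2))
```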
Abstract:
We develop, implement and study a new Bayesian spatial mixture model (BSMM). The proposed BSMM allows for spatial structure in the binary activation indicators through a latent thresholded Gaussian Markov random field. We develop a Gibbs (MCMC) sampler to perform posterior inference on the model parameters, which then allows us to assess the posterior probability of activation for each voxel. One purpose of this article is to compare the HJ model and the BSMM in terms of receiver operating characteristic (ROC) curves. We also consider the accuracy of the spatial mixture model and the BSMM for estimating the size of the activation region in terms of bias, variance and mean squared error. We perform a simulation study to examine these characteristics under a variety of configurations of the spatial mixture model and the BSMM, both as the size of the region changes and as the magnitude of activation changes.
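The simulation study can be pictured with a toy example. The sketch below is not the paper's Gibbs sampler: it merely generates spatially structured activation maps by thresholding a smoothed Gaussian field (a crude stand-in for a thresholded Gaussian Markov random field) and scores the estimated activation-region size by bias, variance and MSE.

```python
# Toy simulation sketch of the evaluation described above (not the authors'
# Gibbs sampler): spatially structured binary activation maps are produced by
# thresholding a smoothed Gaussian field, and the estimated activation-region
# size is scored by bias, variance and MSE. Uniform-filter smoothing is only a
# crude stand-in for a Gaussian Markov random field prior.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(3)

def simulate_once(shape=(32, 32), threshold=1.0, snr=1.5):
    latent = uniform_filter(rng.standard_normal(shape), size=5)
    latent /= latent.std()
    truth = latent > threshold                    # true activation indicators
    data = snr * truth + rng.standard_normal(shape)
    estimate = data > snr / 2                     # naive voxelwise detector
    return truth.sum(), estimate.sum()

true_sizes, est_sizes = zip(*(simulate_once() for _ in range(200)))
err = np.array(est_sizes, float) - np.array(true_sizes, float)
print(f"bias={err.mean():.1f}  variance={err.var():.1f}  MSE={np.mean(err**2):.1f}")
```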
Abstract:
Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique. Some were successful in detecting field defects comparable to those found with standard SAP visual field assessment, while others were not very informative and needed further adjustment and research. In this study we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. The purpose of this study is to examine the benefit of adding the mfVEP hemifield intersector analysis protocol to the standard HFA test when there is suspicious glaucomatous visual field loss. Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects had two standard Humphrey visual field HFA 24-2 tests, optical coherence tomography of the optic nerve head, and a single mfVEP test undertaken in one session. Analysis of the mfVEP results was done using the new analysis protocol: the Hemifield Sector Analysis (HSA) protocol. The retinal nerve fibre layer (RNFL) thickness was recorded to identify subjects with suspicious RNFL loss. The hemifield intersector analysis of the mfVEP results showed that the signal to noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (ANOVA, p<0.001 with a 95% CI). The differences between superior and inferior hemifields were statistically significant in the glaucoma patient group in 11/11 sectors (t-test, p<0.001), partially significant in the glaucoma suspect group in 5/11 sectors (t-test, p<0.01), and not significant between most sectors in the normal group (only 1/11 was significant; t-test, p<0.9). Sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, while for glaucoma suspects they were 89% and 79%. The use of SAP and mfVEP results in subjects with suspicious glaucomatous visual field defects, identified by low RNFL thickness, is beneficial in confirming early visual field defects. The new HSA protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol in addition to SAP analysis can provide information about focal visual field differences across the horizontal midline and confirm suspicious field defects. Sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous field loss. The intersector analysis protocol can detect early field changes not detected by the standard HFA test.
Abstract:
This study was an evaluation of a Field Project Model Curriculum and its impact on achievement, attitude toward science, attitude toward the environment, self-concept, and academic self-concept among at-risk eleventh and twelfth grade students. One hundred eight students were pretested and posttested on the Piers-Harris Children's Self-Concept Scale, PHCSC (1985); the Self-Concept as a Learner Scale, SCAL (1978); the Marine Science Test, MST (1987); the Science Attitude Inventory, SAI (1970); and the Environmental Attitude Scale, EAS (1972). Using a stratified random design, students were randomly assigned, according to sex and stanine level, to three treatment groups. Group one received the field project method, group two received the field study method, and group three received the field trip method. All three groups followed the marine biology course content as specified by Florida Student Performance Objectives and Frameworks. The intervention occurred over ten months, with each group participating in outside-of-classroom activities on a trimonthly basis. Analysis of covariance procedures were used to determine treatment effects. F-ratios, p-levels and t-tests at p < .0062 (.05/8) indicated that a significant difference existed among the three treatment groups. Findings indicated that groups one and two were significantly different from group three, with group one displaying significantly higher results than group two. There were no significant differences between males and females in performance on the five dependent variables. The tenets underlying environmental education are congruent with the recommendations for the reform of science education. These include a value analysis approach, inquiry methods, and critical thinking strategies that are applied to environmental issues.
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures to allow for quick upgradability. Technology is advancing in image display resolutions, advanced compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, with trade-offs among processing performance (to achieve specified frame rates while working on large image data sets), power and cost constraints. There is a need for new architectures to keep pace with the fast innovations in video and imaging. This work contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The following outlines the contributions of the dissertation. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting the occurrence of occlusion during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analysed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulation neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gestures involved in different applications may vary, so it is highly essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
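Contribution (1) centres on an adaptive background-subtraction threshold. The Python sketch below is a schematic software analogue in the spirit of a running-average mean threshold, not the dissertation's RAMT algorithm or its FPGA pipeline; the update constants are assumptions.

```python
# Schematic of a running-average background model with an adaptive mean
# threshold, in the spirit of the RAMT idea described above; the exact update
# rules, constants and the FPGA pipeline are not reproduced here.
import numpy as np

class RunningAverageDetector:
    def __init__(self, shape, alpha=0.05, k=2.5):
        self.bg = np.zeros(shape)        # running-average background
        self.mean_diff = np.ones(shape)  # running average of |frame - bg|
        self.alpha, self.k = alpha, k

    def update(self, frame):
        diff = np.abs(frame - self.bg)
        mask = diff > self.k * self.mean_diff          # adaptive, per-pixel threshold
        self.bg += self.alpha * (frame - self.bg)      # slowly track the scene
        self.mean_diff += self.alpha * (diff - self.mean_diff)
        return mask

# Synthetic frames: static background plus a bright moving block.
det = RunningAverageDetector((64, 64))
rng = np.random.default_rng(4)
for t in range(50):
    frame = 10 + rng.normal(0, 0.5, (64, 64))
    frame[20:28, t % 56:t % 56 + 8] += 30              # moving target
    mask = det.update(frame)
print("foreground pixels in last frame:", int(mask.sum()))
```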
Abstract:
Gate-tunable two-dimensional (2D) materials-based quantum capacitors (QCs) and van der Waals heterostructures involve tuning transport or optoelectronic characteristics by the field effect. Recent studies have attributed the observed gate-tunable characteristics to the change of the Fermi level in the first 2D layer adjacent to the dielectrics, whereas the penetration of the field effect through the one-molecule-thick material is often ignored or oversimplified. Here, we present a multiscale theoretical approach that combines first-principles electronic structure calculations and the Poisson–Boltzmann equation methods to model penetration of the field effect through graphene in a metal–oxide–graphene–semiconductor (MOGS) QC, including quantifying the degree of “transparency” for graphene two-dimensional electron gas (2DEG) to an electric displacement field. We find that the space charge density in the semiconductor layer can be modulated by gating in a nonlinear manner, forming an accumulation or inversion layer at the semiconductor/graphene interface. The degree of transparency is determined by the combined effect of graphene quantum capacitance and the semiconductor capacitance, which allows us to predict the ranking for a variety of monolayer 2D materials according to their transparency to an electric displacement field as follows: graphene > silicene > germanene > WS2 > WTe2 > WSe2 > MoS2 > phosphorene > MoSe2 > MoTe2, when the majority carrier is electron. Our findings reveal a general picture of operation modes and design rules for the 2D-materials-based QCs.
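The "transparency" notion can be pictured with a lumped-capacitance toy model: the smaller graphene's quantum capacitance, the larger the fraction of the displacement field that reaches the semiconductor. The sketch below is only that toy picture, not the paper's multiscale DFT/Poisson-Boltzmann treatment; the semiconductor capacitance value and the transparency definition C_s/(C_q + C_s) are assumptions.

```python
# Simplified lumped-capacitance sketch of field-effect "transparency" through
# graphene. This is NOT the paper's multiscale model: it only illustrates how a
# small quantum capacitance lets more of the displacement field reach the
# semiconductor underneath. Cs below is an assumed semiconductor capacitance.
import numpy as np

e, hbar, kB = 1.602e-19, 1.055e-34, 1.381e-23
vF, T = 1.0e6, 300.0                          # graphene Fermi velocity, temperature

def graphene_cq(Ef):
    """Graphene quantum capacitance per area (F/m^2), common analytic form."""
    kT = kB * T
    pref = 2 * e**2 * kT / (np.pi * (hbar * vF) ** 2)
    return pref * np.log(2 * (1 + np.cosh(Ef / kT)))

Cs = 1e-2                                     # assumed semiconductor capacitance, F/m^2
for Ef_eV in (0.0, 0.1, 0.3):
    Cq = graphene_cq(Ef_eV * e)
    transparency = Cs / (Cq + Cs)             # fraction of field reaching the semiconductor
    print(f"E_F={Ef_eV:.1f} eV  Cq={Cq*1e2:.2f} uF/cm^2  transparency={transparency:.2f}")
```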
Abstract:
Tsunamis occur quite frequently following large-magnitude earthquakes along the Chilean coast. Most of these earthquakes occur along the Peru-Chile Trench, one of the most seismically active subduction zones in the world. This study aims to better understand the characteristics of the tsunamis triggered along the Peru-Chile Trench. We investigate the tsunamis induced by the Mw8.3 Illapel, the Mw8.2 Iquique and the Mw8.8 Maule Chilean earthquakes, which occurred on September 16th, 2015, April 1st, 2014 and February 27th, 2010, respectively. The study involves the relation between the co-seismic deformation and the tsunami generation, the near-field tsunami propagation, and the spectral analysis of the recorded tsunami signals in the near field. We compare the tsunami characteristics to highlight possible similarities between the three events and, therefore, attempt to distinguish the specific characteristics of the tsunamis occurring along the Peru-Chile Trench. We find that these three earthquakes present faults with important extensions beneath the continent, which result in the generation of tsunamis with short wavelengths relative to the fault widths involved, and with reduced initial potential energy. In addition, the presence of the Chilean continental margin, which includes the shelf of shallow bathymetry and the continental slope, constrains the tsunami propagation and the coastal impact. All these factors contribute to a concentrated local impact but can, on the other hand, reduce the far-field tsunami effects of earthquakes along the Peru-Chile Trench.
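The spectral-analysis step mentioned above is, in essence, a Fourier decomposition of near-field sea-level records. The sketch below applies it to a synthetic tide-gauge signal (assumed 1-minute sampling; not the study's data or processing chain).

```python
# Generic spectral-analysis sketch for a near-field tide-gauge record (assumed
# sampling and synthetic signal; not the study's data or processing chain).
import numpy as np

dt = 60.0                                   # sample interval in seconds (1-minute gauge data)
t = np.arange(0, 6 * 3600, dt)              # six hours of record
# Synthetic record: two oscillation periods (20 min and 45 min) plus noise.
eta = 0.5 * np.sin(2 * np.pi * t / 1200) + 0.3 * np.sin(2 * np.pi * t / 2700)
eta += 0.05 * np.random.default_rng(5).standard_normal(t.size)

spec = np.abs(np.fft.rfft(eta - eta.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, dt)
peaks = freqs[np.argsort(spec)[-2:]]        # two most energetic frequencies
print("dominant periods (min):", sorted(1 / peaks / 60))
```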
Abstract:
This thesis develops and tests various transient and steady-state computational models such as direct numerical simulation (DNS), large eddy simulation (LES), filtered unsteady Reynolds-averaged Navier-Stokes (URANS) and steady Reynolds-averaged Navier-Stokes (RANS), with and without magnetic field, to investigate turbulent flows in canonical geometries as well as in the nozzle and mold geometries of the continuous casting process. The direct numerical simulations are first performed in channel, square and 2:1 aspect ratio rectangular ducts to investigate the effect of the magnetic field on turbulent flows. The rectangular duct is a more practical geometry for the continuous casting nozzle and mold and has the option of applying the magnetic field perpendicular to either the broader side or the shorter side. This work forms part of the development of a graphics processing unit (GPU) based CFD code (CU-FLOW) for magnetohydrodynamic (MHD) turbulent flows. The DNS results revealed interesting effects of the magnetic field and its orientation on primary and secondary flows (instantaneous and mean), Reynolds stresses, turbulent kinetic energy (TKE) budgets, momentum budgets and frictional losses, besides providing a DNS database for two-wall-bounded square and rectangular duct MHD turbulent flows. Further, low- and high-Reynolds-number RANS models (k-ε and Reynolds stress models) are developed and tested against the DNS databases for channel and square duct flows with and without magnetic field. The MHD sink terms in the k- and ε-equations are implemented as proposed by Kenjereš and Hanjalić using a user-defined function (UDF) in FLUENT. This work revealed the varying accuracies of different RANS models at different levels, and is useful for industry, including continuous casting, in understanding the accuracies of these models. After assessing the accuracy and computational cost of the RANS models, the steady-state k-ε model is then combined with particle image velocimetry (PIV) and impeller-probe velocity measurements in a 1/3rd-scale water model to study the flow quality coming out of well- and mountain-bottom nozzles and the effect of stopper-rod misalignment on fluid flow. The mountain-bottom nozzle was found to be more prone to long-time asymmetries and higher surface velocities. Left misalignment of the stopper gave a higher surface velocity on the right, leading to a significantly larger number of vortices forming behind the nozzle on the left. Later, transient and steady-state models such as LES, filtered URANS and steady RANS are combined with ultrasonic Doppler velocimetry (UDV) measurements in a GaInSn model of a typical continuous casting process. LES-CU-FLOW is the fastest and the most accurate model owing to its much finer mesh and smaller timestep. This work provided a good understanding of the performance of these models. The behavior of the instantaneous flows, the Reynolds stresses and proper orthogonal decomposition (POD) analysis quantified the nozzle-bottom swirl and its importance for the turbulent flow in the mold. Afterwards, the aforementioned work in the GaInSn model is extended with electromagnetic braking (EMBr) to help optimize a ruler-type brake and its location for the continuous casting process. The magnetic field suppressed turbulence and promoted vortical structures with their axes aligned with the magnetic field, suggesting a tendency towards 2-D turbulence. The stronger magnetic field at the nozzle well and around the jet region created large-scale, lower-frequency flow behavior by suppressing the nozzle-bottom swirl and its front-back alternation. Based on this work, it is advised to avoid a stronger magnetic field around the jet and nozzle bottom to obtain a more stable and less defect-prone flow.
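The strength of the magnetic-field effects discussed above is conventionally characterised by the Hartmann and Stuart numbers. The back-of-the-envelope sketch below uses illustrative GaInSn-like property values and duct dimensions, which are assumptions rather than the thesis's actual parameters.

```python
# Back-of-the-envelope sketch of the non-dimensional groups that govern how
# strongly an applied magnetic field damps turbulence in liquid-metal duct flow
# (Hartmann and Stuart numbers). Property values are illustrative GaInSn-like
# numbers, not those used in the thesis.
import math

sigma = 3.3e6      # electrical conductivity, S/m (GaInSn, approximate)
rho   = 6.4e3      # density, kg/m^3
nu    = 3.4e-7     # kinematic viscosity, m^2/s
L     = 0.05       # duct half-width, m (assumed)
U     = 0.5        # bulk velocity, m/s (assumed)

for B in (0.1, 0.3, 0.5):                        # applied magnetic field, tesla
    Ha = B * L * math.sqrt(sigma / (rho * nu))   # Hartmann number
    Re = U * L / nu                              # Reynolds number
    N  = Ha**2 / Re                              # Stuart (interaction) number
    print(f"B={B:.1f} T  Ha={Ha:.0f}  N={N:.2f}")
```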
Abstract:
Since rugby union turned professional in 1995 there have been considerable advances in research on the demands of the game, largely using Global Positioning System (GPS) analysis over the last 10 years. A systematic review on the use of GPS, particularly the setting of absolute (ABS) and individual (IND) velocity bands in field-based, intermittent, high-intensity (HI) team sports, was undertaken. From 3669 records identified, 38 studies were included for qualitative analysis. Little agreement on the definition of movement intensities within team sports was found; only three papers, all on rugby union, had used IND bands, with only one comparing ABS and IND methods. Thus, the aim of this study was to determine if there is a difference in the demands within positions when comparing ABS and IND methods for GPS analysis and if these differences are significantly different between the forward and back positional groups. A total of 214 data files were recorded from 26 players in 17 matches of the 2015/2016 Scottish BT Premiership. ABS velocity zones 1-7 were set at 1) 0-6, 2) 6.1-11, 3) 11.1-15, 4) 15.1-18, 5) 18.1-21, 6) 21.1-25 and 7) 25.1-40 km.h-1, while IND zones 1-7 were 1) <20, 2) 20-40, 3) 40-50, 4) 50-70, 5) 70-80, 6) 80-95 and 7) 95-100% of the player’s individually determined maximum velocity (Vmax). A 40 m sprint test measured Vmax using OptaPro S4 10 Hz (Catapult, Australia) GPS units to derive the IND bands. The same GPS units were worn during matches. The GPS outputs analysed were % distance, % time, high-intensity efforts (HIEs) over 18.1 km.h-1 / 70% of maximum velocity, and repeated high-intensity efforts (RHIEs), which consist of three HIEs in 21 s. General linear model (GLM) analysis identified a significant difference in the measurement of % total distance covered between the ABS and IND methods in all zones for forwards (p<0.05) and backs (p<0.05). This difference was also significant between forwards and backs in zones 1, shown as mean difference ± standard deviation (3.7±0.7%), 6 (1.2±0.4%) and 7 (1.0±0.0%) respectively (p<0.05). Percentage time estimations were significantly different between ABS and IND analysis within forwards in zones 1 (1.7±1.7%), 2 (-2.9±1.3%), 3 (1.9±0.8%), 4 (-1.4±0.8%) and 5 (0.2±0.4%), and within backs in zones 1 (-10±1.5%), 2 (-1.2±1.1%), 3 (1.8±0.9%) and 5 (0.6±0.5%) (p<0.05). The difference between groups was significant in zones 1, 2, 4 and 5 (p<0.05). The number of HIEs was significantly different between forwards and backs in zones 6 (6±2) and 7 (3±2). RHIEs were significantly different between ABS and IND for forwards (1±2, p<0.05) although not between groups. Until more research on the differences between ABS and IND methods is carried out, neither can be deemed a criterion method. In conclusion, there are significant differences between the ABS and IND methods of GPS analysis of the physical demands of rugby union, which must be considered when these methods are used to inform training load and recovery in order to improve performance and reduce injuries.
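The ABS/IND comparison amounts to binning the same velocity trace with two different sets of zone edges. The sketch below uses the zone boundaries quoted above; the 10 Hz sampling, the synthetic velocity trace and the player's Vmax are assumptions for illustration only.

```python
# Sketch of the ABS vs IND zoning comparison (assumed 10 Hz velocity samples;
# zone boundaries taken from the abstract, Vmax per player from a sprint test).
import numpy as np

ABS_EDGES = [0, 6, 11, 15, 18, 21, 25, 40]            # km/h, zones 1-7
IND_EDGES = [0, 20, 40, 50, 70, 80, 95, 100]          # % of player Vmax, zones 1-7

def pct_distance_per_zone(speed_kmh, edges, scale=1.0, hz=10):
    """Percent of total distance covered in each zone; `scale` maps km/h to the band units."""
    v = np.asarray(speed_kmh)
    dist = v / 3.6 / hz                                # metres per sample
    zone = np.digitize(v * scale, edges[1:-1], right=True)
    return [100 * dist[zone == z].sum() / dist.sum() for z in range(len(edges) - 1)]

rng = np.random.default_rng(6)
speeds = np.clip(rng.gamma(2.0, 3.0, 48000), 0, 34)   # 80 min of 10 Hz samples, km/h
vmax = 34.0                                            # player's sprint-test maximum, km/h

abs_pct = pct_distance_per_zone(speeds, ABS_EDGES)
ind_pct = pct_distance_per_zone(speeds, IND_EDGES, scale=100 / vmax)
print("ABS % distance per zone:", np.round(abs_pct, 1))
print("IND % distance per zone:", np.round(ind_pct, 1))
```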
Abstract:
Common accounts of socialization are predominantly slanted towards cognitive conceptions. When emotions are considered, the emphasis mostly lies on negative emotions. Against this background, this study refines prior research in two ways. First, we offer an emotion-oriented perspective on socialization processes. Second, we concentrate on the socialization of positive emotions. We confirm these assumptions by means of an exploratory case study in the field of consulting firms. Results suggest that positive emotions play a crucial role throughout the different socialization phases and can manifest themselves over time in a virtuous cycle. In addition, conventional notions of socialization agents are refined by this research, which argues that clients ought to be taken similarly into consideration. The article concludes by offering managerial implications, as well as suggestions for future research on the socialization of positive emotions.
Abstract:
We apply wide-field interferometric microscopy techniques to acquire quantitative phase profiles of ventricular cardiomyocytes in vitro during their rapid contraction, with high temporal and spatial resolution. The whole-cell phase profiles are analyzed to yield valuable quantitative parameters characterizing the cell dynamics, without the need to decouple thickness from refractive index differences. Our experimental results verify that these new parameters can be used with wide-field interferometric microscopy to discriminate the modulation of cardiomyocyte contraction dynamics due to temperature variation. To demonstrate the necessity of the proposed numerical analysis for cardiomyocytes, we present confocal dual-fluorescence-channel microscopy results which show that the rapid motion of the cell organelles during contraction precludes assuming a homogeneous refractive index over the entire cell contents, or using multiple-exposure or scanning microscopy.