918 results for Trial and error


Relevance:

90.00%

Publisher:

Abstract:

It has been demonstrated that exposure to a variety of stressful experiences enhances fearful reactions when behavior is tested in current animal models of anxiety. Until now, no study has examined the neurochemical changes during the test and retest sessions of rats submitted to the elevated plus maze (EPM). The present study takes a new approach, using high-performance liquid chromatography (HPLC) to examine changes in dopamine and serotonin levels in the prefrontal cortex, amygdala, dorsal hippocampus, and nucleus accumbens of animals after single or double exposure to the EPM (one-trial tolerance). The study involved two experiments: i) saline or midazolam (0.5 mg/kg) before the first trial, and ii) saline or midazolam before the second trial. For the biochemical analysis, a control group injected with saline and not tested in the EPM was included. Stressful stimuli in the EPM were able to elicit one-trial tolerance to midazolam on re-exposure (61.01%). Significant decreases in serotonin content occurred in the prefrontal cortex (38.74%), amygdala (78.96%), dorsal hippocampus (70.33%), and nucleus accumbens (73.58%) of the animals tested in the EPM (P < 0.05 in all cases relative to controls not exposed to the EPM). A significant decrease in dopamine content was also observed in the amygdala (54.74%, P < 0.05). These changes were maintained across trials. There was no change in the turnover rates of these monoamines. We suggest that exposure to the EPM reduces monoaminergic neurotransmission in limbic structures, which appears to underlie the "one-trial tolerance" phenomenon.

Relevance:

90.00%

Publisher:

Abstract:

This dissertation examines parental disciplinary violence against children in authority records and in the criminal procedure in Finland. The main aim is to analyze disciplinary violence, how it is defined, and how it is constructed as a crime by social workers, the police, and parents. This dissertation consists of four sub-studies and a summary article. In the first sub-study, I examine how disciplinary violence appears in child welfare documents and analyze the decision-making processes and measures taken by the child welfare workers. The second sub-study, utilizing police interview data, examines police officers’ perceptions of disciplinary violence, its criminalization, and its investigation. In addition to this analysis of police officers’ own perceptions, in the third sub-study I use reports of crime and pre-trial investigation documents to look at what a typical suspicion of disciplinary violence coming to the attention of the police is and examine the decision-making processes of the police. Utilizing authority data, the fourth sub-study analyzes how parents rationalize the use of disciplinary violence to the authorities investigating these suspicions. The research provides findings that are unprecedented in Finland. Firstly, it was shown that social workers’ decision-making processes in suspicions of disciplinary violence follow three pathways of reasoning, with many factors taken into consideration, and that in less than one-third of the cases a request for criminal investigation had been made to the police. Secondly, it was verified that police officers hold different perceptions of disciplinary violence, and these perceptions have multiple effects on the investigation of these cases and the construction of disciplinary violence as a crime. Thirdly, the analysis of the reports of crime and pre-trial investigation documents showed that almost two-thirds of the cases of disciplinary violence had been sent to a prosecutor by the police and thus defined as a crime. However, acts of disciplinary violence were often seen as ‘educational, petty one-off incidents’, and a possible trial and punishment of the perpetrator were regarded as unreasonable. Fourthly, it was found that parents often try to neutralize and rationalize the violence they have used against their children, for example by denying the victim, the criminal intent, or the entire act, or by appealing to the necessity of the forbidden act. The dissertation concludes that disciplinary violence is defined and constructed in authority policies and practices, first and foremost, by the severity of the act, the nature of the act as continuous or singular, the perceived harm caused by the act to a child, and the perceptions of the authorities regarding physical punishment of children. The asymmetrical power setting present in disciplinary violence and parents’ legitimized right to raise and discipline their children partly seem to explain why processing these suspicions of violence under criminal law and understanding them as crimes is difficult. Finally, this research calls for more coherent and consistent authority practices and policies, achieved by educating authorities and increasing awareness of disciplinary violence; questions the need for a concept like ‘disciplinary’ violence; and suggests more emphasis on unambiguous perceptions of a child’s best interest.

Relevance:

90.00%

Publisher:

Abstract:

Whereas the role of the anterior cingulate cortex (ACC) in cognitive control has received considerable attention, much less work has been done on the role of the ACC in autonomic regulation. Its connections through the vagus nerve to the sinoatrial node of the heart are thought to exert modulatory control over cardiovascular arousal. Therefore, the ACC is responsible not only for the implementation of cognitive control, but also for the dynamic regulation of cardiovascular activity that characterizes healthy heart rate and adaptive behaviour. However, cognitive control and autonomic regulation are rarely examined together. Moreover, those studies that have examined the role of phasic vagal cardiac control in conjunction with cognitive performance have produced mixed results, finding relations for specific age groups and types of tasks but not consistently. So, while autonomic regulatory control appears to support effective cognitive performance under some conditions, it is not presently clear just what factors contribute to these relations. The goal of the present study was, therefore, to examine the relations between autonomic arousal, neural responsivity, and cognitive performance in the context of a task that required ACC support. Participants completed a primary inhibitory control task with an embedded working memory load. Pre-test cardiovascular measures were obtained, and on-task ERPs associated with response control (N2/P3) and error-related processes (ERN/Pe) were analyzed. Results indicated that response inhibition was unrelated to phasic vagal cardiac control, as indexed by respiratory sinus arrhythmia (RSA). However, higher resting RSA was associated with larger ERN amplitude for the highest working memory load condition. This finding suggests that those individuals with greater autonomic regulatory control exhibited more robust ACC error-related responses on the most challenging task condition. On the other hand, exploratory analyses with rate pressure product (RPP), a measure of sympathetic arousal, indicated that higher pre-test RPP (i.e., more sympathetic influence) was associated with more errors on "catch" NoGo trials, i.e., NoGo trials that immediately followed other NoGo trials and consequently required enhanced response control. Higher pre-test RPP was also associated with smaller-amplitude ERNs for all three working memory loads and smaller-amplitude P3s for the low and medium working memory load conditions. Thus, higher pre-test sympathetic arousal was associated with poorer performance on the more demanding "catch" NoGo trials and less robust ACC-related electrocortical responses. The findings from the present study highlight the interdependence of electrocortical and cardiovascular processes. While higher pre-test parasympathetic control seemed to relate to more robust ACC error-related responses, higher pre-test sympathetic arousal resulted in poorer inhibitory control performance and smaller ACC-generated electrocortical responses. Furthermore, these results provide a base from which to explore the relation between ACC and neuro/cardiac responses in older adults, who may display greater variance due to the vulnerability of these systems to the normal aging process.
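
As background on the two cardiovascular indices used above: RPP is conventionally the product of heart rate and systolic blood pressure, and RSA is often estimated as the natural log of high-frequency power in the interbeat-interval series. The Python sketch below uses made-up data and generic parameter choices; it illustrates the general idea, not the study's actual scoring procedure.

```python
import numpy as np
from scipy.signal import welch

def rate_pressure_product(heart_rate_bpm, systolic_bp_mmhg):
    """RPP is conventionally heart rate (beats/min) times systolic blood pressure (mmHg)."""
    return heart_rate_bpm * systolic_bp_mmhg

def rsa_log_hf_power(ibi_ms, fs=4.0):
    """Crude RSA estimate: ln of high-frequency (0.12-0.40 Hz) power of the
    interbeat-interval (IBI) series after resampling it onto an even time grid."""
    beat_times = np.cumsum(ibi_ms) / 1000.0                    # beat occurrence times (s)
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    ibi_even = np.interp(grid, beat_times, ibi_ms)             # evenly sampled IBI series
    freqs, psd = welch(ibi_even - ibi_even.mean(), fs=fs,
                       nperseg=min(256, len(ibi_even)))
    hf = (freqs >= 0.12) & (freqs <= 0.40)
    hf_power = np.sum(psd[hf]) * (freqs[1] - freqs[0])         # integrate the HF band
    return float(np.log(hf_power))

# Toy usage with simulated beats carrying respiratory modulation at 0.25 Hz
ibi = 850 + 60 * np.sin(2 * np.pi * 0.25 * np.arange(300) * 0.85)   # IBIs in ms
print(rate_pressure_product(72, 118))    # 8496, in bpm x mmHg
print(rsa_log_hf_power(ibi))
```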

Relevance:

90.00%

Publisher:

Abstract:

Imaging studies have shown reduced frontal lobe resources following total sleep deprivation (TSD). The anterior cingulate cortex (ACC) in the frontal region plays a role in performance monitoring and cognitive control; both error detection and response inhibition are impaired following sleep loss. Event-related potentials (ERPs) are an electrophysiological tool used to index the brain's response to stimuli and information processing. In the Flanker task, the error-related negativity (ERN) and error positivity (Pe) ERPs are elicited after erroneous button presses. In a Go/NoGo task, NoGo-N2 and NoGo-P3 ERPs are elicited during high-conflict stimulus processing. Research investigating the impact of sleep loss on ERPs during performance monitoring is equivocal, possibly due to task differences, sample size differences and varying degrees of sleep loss. Based on the effects of sleep loss on frontal function and prior research, it was expected that the sleep deprivation group would have lower accuracy, slower reaction time and impaired remediation on performance monitoring tasks, along with attenuated and delayed stimulus- and response-locked ERPs. In the current study, 49 young adults (24 male) were screened to be healthy good sleepers and then randomly assigned to a sleep-deprived (n = 24) or rested control (n = 25) group. Participants slept in the laboratory on a baseline night, followed by a second night of sleep or wake. Flanker and Go/NoGo tasks were administered in a battery at 10:30 am (i.e., 27 hours awake for the sleep deprivation group) to measure performance monitoring. On the Flanker task, the sleep-deprived group was significantly slower than controls (ps < .05), but the groups did not differ on accuracy. No group differences were observed in post-error slowing, but a trend was observed for less remedial accuracy in the sleep-deprived group compared to controls (p = .09), suggesting impairment in the ability to take remedial action following TSD. Delayed P300s were observed in the sleep-deprived group on congruent and incongruent Flanker trials combined (p = .001). On the Go/NoGo task, the hit rate (i.e., Go accuracy) was significantly lower in the sleep-deprived group compared to controls (p < .001), but no differences were found in false alarm rates (i.e., NoGo accuracy). For the sleep-deprived group, the Go-P3 was significantly smaller (p = .045) and there was a trend for a smaller NoGo-N2 compared to controls (p = .08). The ERN amplitude was reduced in the TSD group compared to controls in both the Flanker and Go/NoGo tasks. Error rate was significantly correlated with the amplitude of response-locked ERNs in the control (r = -.55, p = .005) and sleep-deprived (r = -.46, p = .021) groups; error rate was also correlated with Pe amplitude in controls (r = .46, p = .022), and a trend was found in the sleep-deprived participants (r = .39, p = .052). An exploratory analysis showed significantly larger Pe mean amplitudes (p = .025) in the sleep-deprived group compared to controls for participants who made more than 40 errors on the Flanker task. Altered stimulus processing, as indexed by delayed P3 latency during the Flanker task and smaller-amplitude Go-P3s during the Go/NoGo task, indicates impairment in stimulus evaluation and/or context updating during frontal lobe tasks. ERN and NoGo-N2 reductions in the sleep-deprived group confirm impairments in the monitoring system. These data add to a body of evidence showing that the frontal brain region is particularly vulnerable to sleep loss. Understanding the neural basis of these deficits in performance monitoring abilities is particularly important for our increasingly sleep-deprived society and for safety and productivity in situations like driving and sustained operations.
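
For readers unfamiliar with response-locked ERPs, the sketch below illustrates the general logic of deriving an ERN waveform: epoch the EEG around each erroneous button press, baseline-correct, and average. Array names, sampling rate and window lengths are hypothetical; this is not the study's processing pipeline.

```python
import numpy as np

FS = 500                       # assumed sampling rate (Hz)
PRE, POST = 0.2, 0.6           # epoch window: 200 ms before to 600 ms after the response

def response_locked_average(eeg, error_samples):
    """Average a single EEG channel around each error response.

    eeg           : 1-D continuous signal from one electrode (e.g. a fronto-central site)
    error_samples : sample indices of erroneous button presses
    """
    pre, post = int(PRE * FS), int(POST * FS)
    epochs = []
    for s in error_samples:
        if s - pre < 0 or s + post > len(eeg):
            continue                             # skip epochs that run off the recording
        epoch = eeg[s - pre:s + post].astype(float)
        epoch -= epoch[:pre].mean()              # baseline-correct on the pre-response interval
        epochs.append(epoch)
    return np.mean(epochs, axis=0)               # ERN shows up as an early negative deflection

# Toy usage with simulated data
rng = np.random.default_rng(0)
eeg = rng.normal(0, 5, 60 * FS)                  # one minute of noise
errors = np.array([5000, 12000, 23000, 31000])   # pretend error-response samples
print(response_locked_average(eeg, errors).shape)   # (400,) = 0.8 s at 500 Hz
```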

Relevance:

90.00%

Publisher:

Abstract:

A double-blinded, placebo-controlled, cross-over design was used to investigate whether sodium citrate dihydrate (Na-CIT) supplementation improves 200 m swimming performance. Ten well-trained, male swimmers (14.9 ± 0.4 y; 63.5 ± 4 kg) performed four 200 m time trials: acute (ACU) supplementation (0.5 g/kg), acute placebo (PLC-A), chronic (CHR) supplementation (0.1 g/kg for 3 days and 0.3 g/kg on the 4th day, pre-trial), and chronic placebo (PLC-C). Na-CIT was administered 120 min pre-trial in solution with 500 mL of flavored water; the placebo was flavored water alone. Blood lactate, base excess (BE), bicarbonate, pH, and PCO2 were analyzed at baseline, 100 min post-ingestion, and 3 min post-trial via finger prick. Time, lactate, and rating of perceived exertion did not differ between trials. BE and bicarbonate were significantly higher for the ACU and CHR trials compared to placebo. “Responders” improved by 1.03% (P = 0.043) and attained significantly higher post-trial lactate concentrations in the ACU versus PLC-A trials and compared with non-responders in the ACU and CHR trials.

Relevance:

90.00%

Publisher:

Abstract:

This thesis tested a model of neurovisceral integration (Thayer & Lane, 2001) wherein parasympathetic autonomic regulation is considered to play a central role in cognitive control. We asked whether respiratory sinus arrhythmia (RSA), a parasympathetic index, and cardiac workload (rate pressure product, RPP) would influence cognition and whether this would change with age. Cognitive control was measured behaviourally and electrophysiologically through the error-related negativity (ERN) and error positivity (Pe). The ERN and Pe are thought to be generated by the anterior cingulate cortex (ACC), a region involved in regulating cognitive and autonomic control and susceptible to age-related change. In Study 1, older and younger adults completed a working memory Go/NoGo task. Although RSA did not relate to performance, higher pre-task RPP was associated with poorer NoGo performance among older adults. Relations between ERN/Pe and accuracy were indirect and more evident in younger adults. Thus, Study 1 supported the link between cognition and autonomic activity, specifically, cardiac workload in older adults. In Study 2, we included younger adults and manipulated a Stroop task to clarify conditions under which associations between RSA and performance will likely emerge. We varied task parameters to allow for proactive versus reactive strategies, and motivation was increased via financial incentive. Pre-task RSA predicted accuracy when response contingencies required maintenance of a specific item in memory. Thus, RSA was most relevant when performance required proactive control, a metabolically costly strategy that would presumably be more reliant on autonomic flexibility. In Study 3, we included older adults and examined RSA and proactive control in an additive factors framework. We maintained the incentive and measured fitness. Higher pre-task RSA among older adults was associated with greater accuracy when proactive control was needed most. Conversely, performance of young women was consistently associated with fitness. Relations between ERN/Pe and accuracy were modest; however, isolating ACC activity via independent component analysis allowed for more associations with accuracy to emerge in younger adults. Thus, performance in both groups appeared to be differentially dependent on RSA and ACC activation. Altogether, these data are consistent with a neurovisceral integration model in the context of cognitive control.

Relevance:

90.00%

Publisher:

Abstract:

A variety of models of the decision-making process in diverse contexts assume that subjects accumulate sensory evidence, continuously sampling and integrating signals for and against alternative hypotheses. Integration continues until the evidence in favour of one of the hypotheses exceeds a decision criterion threshold (the level of proof required to make a decision). Newer models suggest that this decision process is dynamic: the different parameters can vary between trials, and even within a trial, rather than being a static process whose parameters change only between blocks of trials. The goal of this doctoral project is to demonstrate that decisions about reaching movements involve a mechanism of temporal accumulation of sensory information leading to a decision threshold. To do so, we developed a decision-making paradigm based on an ambiguous stimulus in order to determine whether neurons of primary motor cortex (M1), dorsal premotor cortex (PMd) and dorsolateral prefrontal cortex (DLPFc) show neural correlates of this temporal accumulation process. We first tested different versions of the task with human subjects in order to develop a task in which subjects show the ideal behaviour needed to test the working hypothesis. Behavioural data from humans and monkeys show a systematic increase in reaction times and error rates with increasing stimulus ambiguity. These results are consistent with the predictions of diffusion models, as confirmed by computational modelling of the data. We then recorded cells in M1, PMd and DLPFc of two monkeys while they performed the task. M1 neurons did not appear to be influenced by stimulus ambiguity but instead discharged in correlation with the executed movement. PMd neurons encoded the direction of the movement chosen by the monkeys fairly rapidly after stimulus presentation. Moreover, the activation of many PMd cells was slower as stimulus ambiguity increased and took longer to signal the movement direction. PMd activity reflected the animal's choice, regardless of whether it was a correct response or an error. This supports a role for PMd in decision-making about reaching movements. Finally, we began recordings in prefrontal cortex, and the results presented are preliminary. DLPFc neurons appear to be much more influenced by combinations of the colour and spatial-position factors than PMd neurons. Our conclusion is that PMd is involved in evaluating the evidence for or against the spatial position of different potential targets, largely independently of their colour, whereas DLPFc is responsible for processing the combined information about target colour and spatial position and about the ambiguous stimulus that is needed to link the ambiguous stimulus to the corresponding target.
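
The accumulation-to-threshold mechanism described above is commonly formalized as a drift-diffusion process. The following illustrative Python sketch (with arbitrary parameter values, not fitted to the reported data) shows why mean reaction time and error rate both increase as stimulus ambiguity grows, when ambiguity is modelled as a reduction of the drift rate toward the correct bound.

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one trial of a symmetric drift-diffusion decision.

    drift     : mean rate of evidence accumulation (smaller = more ambiguous stimulus)
    threshold : +/- decision bounds
    Returns (choice_correct, reaction_time_in_seconds).
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()   # noisy evidence step
        t += dt
    return x >= threshold, t

rng = np.random.default_rng(1)
for drift in (1.5, 0.8, 0.3):            # decreasing drift = increasing ambiguity
    trials = [simulate_ddm(drift, rng=rng) for _ in range(500)]
    rt = np.mean([t for _, t in trials])
    err = np.mean([not correct for correct, _ in trials])
    print(f"drift={drift:.1f}  mean RT={rt:.2f}s  error rate={err:.2%}")
```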

Relevance:

90.00%

Publisher:

Abstract:

Although the passage of time alters the brain, cognition does not necessarily follow the same fate. Compensatory mechanisms exist that allow cognition to be preserved (cognitive reserve) despite aging. Older adults may recruit new neural circuits (neural compensation) or existing circuits that are less susceptible to the effects of aging (neural reserve) to maintain a high level of cognitive performance. However, how these mechanisms affect cortical and striatal activity during tasks involving rule changes (set-shifting), and during semantic and phonological processing, has not been extensively explored. The aim of this thesis is to explore how aging affects patterns of brain activity in executive processes on the one hand and in the use of lexical rules on the other. To do so, we used functional magnetic resonance imaging (fMRI) during performance of a lexical analogue of the Wisconsin Card Sorting Task. This task has been strongly linked to fronto-striatal activity during rule changes, as well as to the recruitment of regions associated with semantic and phonological processing during semantic and phonological decisions, respectively. We therefore compared the brain activity of young individuals (18 to 35 years) with that of older individuals (55 to 75 years) during execution of this task. Both groups showed involvement of fronto-striatal loops associated with planning and executing rule changes. However, whereas young adults appeared to activate a "cognitive loop" (ventrolateral prefrontal cortex, caudate nucleus and thalamus) when cued that a rule change was required, and a "motor loop" (posterior prefrontal cortex and putamen) when they had to execute the change, older participants showed activation of both loops only during the execution of rule changes. Young adults tended to show increased activity in the ventrolateral prefrontal cortex, fusiform gyrus, ventral temporal lobe and caudate nucleus during semantic decisions, and in posterior Broca's area, the temporoparietal junction and motor cortex during phonological decisions. Older participants showed activity in lateral prefrontal and motor cortex during both types of lexical decisions. Moreover, when semantic and phonological decisions were compared with each other, young adults showed significant differences in several brain regions, whereas older adults did not. In conclusion, our first study showed delayed brain activity in older adults during set-shifting. This allowed us to formulate the Temporal Compensation Hypothesis (third manuscript), which posits a compensatory mechanism characterized by an age-related delay in brain activity that preserves cognition at the expense of execution speed. Regarding language processes (second study), the semantic and phonological circuits appear to merge into a single circuit in older individuals, which likely reflects neural reserve and compensation mechanisms that preserve language abilities.

Relevance:

90.00%

Publisher:

Abstract:

Measurement is the act, or the result, of a quantitative comparison between a given quantity and a quantity of the same kind chosen as a unit. It is generally agreed that all measurements contain errors. In a measuring system in which a human being takes the measurement with a measuring instrument following a preset process, the measurement error could be due to the instrument, the process, or the human being involved. The first part of the study is devoted to understanding human errors in measurement. For that purpose, selected person-related and work-related factors that could affect measurement errors were identified. Though these factors are well known, the exact extent of the error, and the extent to which different factors affect human errors in measurement, are less well reported. Human errors in measurement were characterized through an experimental study using different subjects, in which the factors were changed one at a time and the measurements made by the subjects were recorded. The pre-experiment survey showed that respondents could not give correct answers to questions about the actual extent of human-related measurement errors. This confirmed the fears expressed regarding the lack of knowledge about the extent of human-related measurement errors among professionals associated with quality. In the post-experiment phase of the survey, however, the answers regarding the extent of human-related measurement errors improved significantly, since the answer choices were provided based on the experimental study. It is hoped that this work will help users of measurement in practice to better understand and manage the phenomenon of human-related errors in measurement.
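
As a generic illustration of how human-related measurement error can be quantified (a sketch under simple assumptions, not the experimental design used in this study), one can have several subjects measure the same artefact repeatedly and separate within-subject repeatability from between-subject reproducibility:

```python
import numpy as np

# measurements[i, j] = j-th repeated measurement (mm) of the same artefact by subject i
measurements = np.array([
    [10.02, 10.01, 10.03, 10.02],
    [10.05, 10.06, 10.04, 10.05],
    [ 9.99, 10.00, 10.01, 10.00],
])
true_value = 10.00                                          # assumed reference value

subject_means = measurements.mean(axis=1)
repeatability = measurements.std(axis=1, ddof=1).mean()     # average within-subject spread
reproducibility = subject_means.std(ddof=1)                 # spread between subjects
bias = measurements.mean() - true_value                     # systematic offset

print(f"repeatability   (within-subject sd): {repeatability:.4f} mm")
print(f"reproducibility (between-subject sd): {reproducibility:.4f} mm")
print(f"overall bias: {bias:+.4f} mm")
```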

Relevance:

90.00%

Publisher:

Abstract:

Any automatically measurable, robust and distinctive physical characteristic or personal trait that can be used to identify an individual or verify the claimed identity of an individual, referred to as a biometric, has gained significant interest in the wake of heightened concerns about security and rapid advancements in networking, communication and mobility. Multimodal biometrics is expected to be ultra-secure and reliable, owing to the presence of multiple, independent verification clues. In this study, a multimodal biometric system utilising audio and facial signatures has been implemented and an error analysis has been carried out. A total of one thousand face images and 250 sound tracks of 50 users were used for training the proposed system. To account for attempts by unregistered users, data from 25 new users were tested. Short-term spectral features were extracted from the sound data, and vector quantization was performed using the K-means algorithm. Face images are identified with the eigenface approach using Principal Component Analysis. The success rate of the multimodal system using speech and face is higher than that of the individual unimodal recognition systems.
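
To make the two building blocks named above concrete, the sketch below uses scikit-learn for K-means vector quantization of short-term spectral feature vectors and for PCA-based eigenface projection. The data, dimensions and score functions are invented for illustration and do not reproduce the reported system.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# --- Speaker side: vector-quantize short-term spectral features (MFCC-like vectors) ---
speech_features = rng.normal(size=(2000, 13))      # stand-in for one speaker's training vectors
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(speech_features)

def vq_distortion(features, model):
    """Mean distance of feature vectors to their nearest codeword; lower = better match."""
    dists = np.linalg.norm(features - model.cluster_centers_[model.predict(features)], axis=1)
    return float(dists.mean())

# --- Face side: project images onto eigenfaces learned by PCA ---
face_images = rng.normal(size=(100, 64 * 64))      # stand-in for flattened training face images
pca = PCA(n_components=20).fit(face_images)
gallery_projection = pca.transform(face_images)

def face_score(probe, gallery_proj):
    """Distance between a probe image and its closest gallery face in eigenface space."""
    probe_proj = pca.transform(probe.reshape(1, -1))
    return float(np.linalg.norm(gallery_proj - probe_proj, axis=1).min())

probe_speech, probe_face = rng.normal(size=(200, 13)), face_images[0]
print(vq_distortion(probe_speech, codebook), face_score(probe_face, gallery_projection))
```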

Relevance:

90.00%

Publisher:

Abstract:

Study of variable stars is an important topic in modern astrophysics. After the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating at the scale of petabytes. The huge amount of data needs many automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on earth, stars that show a change in apparent brightness over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermo-nuclear processes, and such stars are generally known as intrinsic variables; in other cases it is due to external processes, such as eclipse or rotation, and the stars are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star. One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, and also some other derived parameters. Of these, the period is the most important parameter, since wrong periods lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data, to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (assuming no statistical model such as a Gaussian) methods. Many of the parametric methods are based on variations of discrete Fourier transforms, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong detection of the period can be due to several reasons, such as power leakage to other frequencies caused by the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
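
To make the period-search problem concrete, here is a minimal NumPy sketch of Phase Dispersion Minimisation in the spirit of Stellingwerf (1978): fold the light curve on each trial period, bin by phase, and choose the period that minimizes the ratio of within-bin variance to total variance. The synthetic data and parameter choices are illustrative only; this is not the modified cubic-spline method introduced in the thesis.

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """Stellingwerf-style dispersion statistic: within-bin variance / total variance."""
    phase = (t / period) % 1.0
    total_var = mag.var(ddof=1)
    s2, n = 0.0, 0
    for b in range(n_bins):
        in_bin = mag[(phase >= b / n_bins) & (phase < (b + 1) / n_bins)]
        if len(in_bin) > 1:
            s2 += (len(in_bin) - 1) * in_bin.var(ddof=1)
            n += len(in_bin) - 1
    return (s2 / n) / total_var if n else np.inf

# Synthetic, unevenly sampled light curve with a true period of 0.7 days
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 30, 300))                  # irregular observation times (days)
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 0.7) + rng.normal(0, 0.02, t.size)

trial_periods = np.linspace(0.2, 2.0, 4000)
theta = np.array([pdm_theta(t, mag, p) for p in trial_periods])
print("best period:", trial_periods[theta.argmin()])  # expect a value near 0.7 days
```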

Relevance:

90.00%

Publisher:

Abstract:

Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A linear mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted using the method of maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including treating all effects as fixed for the initial model building, modelling the variance of a parameter on a logarithmic scale, and centring of continuous covariates. The initial model building process indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
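
For reference, a random-intercept logistic model of the kind described can be written in generic notation (our symbols, not taken from the trial report) as follows:

```latex
\operatorname{logit}\Pr(Y_{ij}=1) \;=\; \beta_0 \;+\; \beta_1\,\mathrm{op}_{ij} \;+\; u_j,
\qquad u_j \sim N\!\left(0,\ \sigma^2_{\text{surgeon}}\right)
```

Here $Y_{ij}$ indicates at least one major complication for patient $i$ of surgeon $j$, $\mathrm{op}_{ij}$ codes the randomized type of operation, and the random effect $u_j$ captures between-surgeon variation; the estimated variance $\sigma^2_{\text{surgeon}}$ is the quantity that measures how much the surgeon adds to outcome variability.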

Relevance:

90.00%

Publisher:

Abstract:

Garment information tracking is required for clean-room garment management. In this paper, we present a camera-based robust system that implements Optical Character Recognition (OCR) techniques to perform garment label recognition. In the system, a camera is used for image capturing; an adaptive thresholding algorithm is employed to generate binary images; Connected Component Labelling (CCL) is then adopted for object detection in the binary image as part of finding the ROI (Region of Interest); Artificial Neural Networks (ANNs) with the BP (Back Propagation) learning algorithm are used for digit recognition; and finally the results are verified against a system database. The system has been tested. The results show that it is capable of coping with variance in lighting, digit twisting, background complexity, and font orientations. The system performance with respect to the digit recognition rate has met the design requirement, and the system achieved real-time, error-free garment information tracking during testing.
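
The first two stages of such a pipeline are standard image-processing operations. The OpenCV sketch below (placeholder file name, illustrative size thresholds, not the authors' implementation) shows adaptive thresholding followed by connected-component labelling to isolate candidate digit regions.

```python
import cv2

# Load a garment-label photograph in greyscale (file name is a placeholder).
image = cv2.imread("garment_label.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("garment_label.png not found")

# Adaptive thresholding copes with uneven lighting better than a single global threshold.
# Arguments: source, max value, adaptive method, threshold type, block size, constant C.
binary = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 10)

# Connected-component labelling: each blob gets a label plus a bounding box and area.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

# Keep blobs of plausible digit size/shape as regions of interest (thresholds are illustrative).
rois = []
for i in range(1, num_labels):                 # label 0 is the background
    x, y, w, h, area = stats[i]
    if 50 < area < 5000 and 0.2 < w / h < 1.2:
        rois.append(binary[y:y + h, x:x + w])

print(f"{len(rois)} candidate digit regions found")
# Each ROI would then be resized and passed to the digit classifier (an ANN in the paper).
```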

Relevance:

90.00%

Publisher:

Abstract:

A predominance of small, dense low-density lipoprotein (LDL) is a major component of an atherogenic lipoprotein phenotype, and a common, but modifiable, source of increased risk for coronary heart disease in the free-living population. While much of the atherogenicity of small, dense LDL is known to arise from its structural properties, the extent to which an increase in the number of small, dense LDL particles (hyper-apoprotein B) contributes to this risk of coronary heart disease is currently unknown. This study reports a method for the recruitment of free-living individuals with an atherogenic lipoprotein phenotype for a fish-oil intervention trial, and critically evaluates the relationship between LDL particle number and the predominance of small, dense LDL. In this group, volunteers were selected through local general practices on the basis of a moderately raised plasma triacylglycerol (triglyceride) level (>1.5 mmol/l) and a low concentration of high-density-lipoprotein cholesterol (<1.1 mmol/l). The screening of LDL subclasses revealed a predominance of small, dense LDL (LDL subclass pattern B) in 62% of the cohort. As expected, subjects with LDL subclass pattern B were characterized by higher plasma triacylglycerol and lower high-density lipoprotein cholesterol (<1.1 mmol/l) levels and, less predictably, by lower LDL cholesterol and apoprotein B levels (P<0.05; LDL subclass A compared with subclass B). While hyper-apoprotein B was detected in only five subjects, the relative percentage of small, dense LDL-III in subjects with subclass B showed an inverse relationship with LDL apoprotein B (r=-0.57; P<0.001), identifying a subset of individuals with plasma triacylglycerol above 2.5 mmol/l and a low concentration of LDL almost exclusively in a small and dense form. These findings indicate that a predominance of small, dense LDL and hyper-apoprotein B do not always co-exist in free-living groups. Moreover, if coronary risk increases with increasing LDL particle number, these results imply that the risk arising from a predominance of small, dense LDL may actually be reduced in certain cases when plasma triacylglycerol exceeds 2.5 mmol/l.

Relevance:

90.00%

Publisher:

Abstract:

Numerical weather prediction (NWP) centres use numerical models of the atmospheric flow to forecast future weather states from an estimate of the current state. Variational data assimilation (VAR) is commonly used to determine an optimal state estimate that minimizes the errors between observations of the dynamical system and model predictions of the flow. The rate of convergence of the VAR scheme and the sensitivity of the solution to errors in the data depend on the condition number of the Hessian of the variational least-squares objective function. The traditional formulation of VAR is ill-conditioned and hence leads to slow convergence and an inaccurate solution. In practice, operational NWP centres precondition the system via a control variable transform to reduce the condition number of the Hessian. In this paper we investigate the conditioning of VAR for a single, periodic, spatially distributed state variable. We present theoretical bounds on the condition numbers of the original and preconditioned Hessians and hence demonstrate the improvement produced by the preconditioning. We also investigate theoretically the effect of observation position and error variance on the preconditioned system and show that the problem becomes more ill-conditioned with increasingly dense and accurate observations. Finally, we confirm the theoretical results in an operational setting by giving experimental results from the Met Office variational system.
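
In generic notation consistent with standard presentations of variational assimilation (our symbols, not copied from the paper), the objective function, its Hessian, and the effect of a control variable transform can be written as:

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\,(\mathbf{y}-\mathbf{H}\mathbf{x})^{\mathrm{T}}\mathbf{R}^{-1}(\mathbf{y}-\mathbf{H}\mathbf{x}),
\qquad
\mathbf{S} \equiv \nabla^{2} J = \mathbf{B}^{-1} + \mathbf{H}^{\mathrm{T}}\mathbf{R}^{-1}\mathbf{H}.

% Control variable transform x - x_b = B^{1/2} v gives the preconditioned Hessian
\hat{\mathbf{S}} = \mathbf{I} + \mathbf{B}^{\mathrm{T}/2}\,\mathbf{H}^{\mathrm{T}}\mathbf{R}^{-1}\mathbf{H}\,\mathbf{B}^{1/2}
```

Here $\mathbf{x}_b$ is the background state, $\mathbf{B}$ and $\mathbf{R}$ are the background and observation error covariances, and $\mathbf{H}$ is the observation operator. Because the second term of $\hat{\mathbf{S}}$ is positive semi-definite, every eigenvalue of $\hat{\mathbf{S}}$ is at least one, so its condition number is controlled by the largest eigenvalue alone; this is the sense in which the transform typically improves the conditioning relative to $\mathbf{S}$.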