977 results for sicurezza safety error detection


Abstract:

Executive control refers to a set of abilities enabling us to plan, control and implement our behavior to rapidly and flexibly adapt to environmental requirements. These adaptations notably involve the suppression of intended or ongoing cognitive or motor processes, a skill referred to as "inhibitory control". To implement efficient executive control of behavior, one must monitor one's performance following errors and adjust behavior accordingly. Deficits in inhibitory control have been associated with the emergence of a wide range of psychiatric disorders, ranging from drug addiction to attention deficit/hyperactivity disorders. Inhibitory control deficits could, however, be remediated. The brain indeed has the remarkable ability to reorganize following training, allowing for behavioral improvements. This mechanism is referred to as neural and behavioral plasticity. Here, our aim is to investigate training-induced plasticity in inhibitory control and to propose a model of inhibitory control explaining the spatio-temporal brain mechanisms supporting inhibitory control processes and their plasticity. In the two studies entitled "Brain dynamics underlying training-induced improvement in suppressing inappropriate action" (Manuel et al., 2010) and "Training-induced neuroplastic reinforcement of top-down inhibitory control" (Manuel et al., 2012c), we investigated the neurophysiological and behavioral changes induced by inhibitory control training with two different tasks and populations of healthy participants. We report that different inhibitory control trainings developed either automatic/bottom-up inhibition in parietal areas or reinforced controlled/top-down inhibitory control in frontal brain regions. We discuss the results of both studies in the light of a model of fronto-basal inhibition processes. In "Spatio-temporal brain dynamics mediating post-error behavioral adjustments" (Manuel et al., 2012a), we investigated how error detection modulates the processing of subsequent stimuli and in turn impacts behavior. We showed that during early integration of stimuli, the activity of prefrontal and parietal areas is modulated according to previous performance and impacts post-error behavioral adjustments. We discuss these results in terms of a shift from an automatic to a controlled form of inhibition induced by the detection of errors, which in turn influences response speed. In "Inter- and intra-hemispheric dissociations in ideomotor apraxia: a large-scale lesion-symptom mapping study in subacute brain-damaged patients" (Manuel et al., 2012b), we investigated ideomotor apraxia, a deficit in performing pantomime gestures of object use, and identified the anatomical correlates of distinct ideomotor apraxia error types in 150 subacute brain-damaged patients. Our results reveal a left intra-hemispheric dissociation for different pantomime error types, but with an unspecific role for inferior frontal areas.

Abstract:

In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations such as the ICRU and the AAPM recommend the use of in vivo dosimetry; it is also mandatory in some countries, such as France. Various in vivo dosimetry methods have been developed during the past years. These methods are point-, line-, plane- or 3D dose controls. 3D in vivo dosimetry provides the most information about the dose delivered to the patient, compared with 1D and 2D methods. However, to our knowledge, it is generally not yet routinely applied to patient treatments. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method is described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is within the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows us to detect some set-up errors and interfraction geometry modifications. We also discuss the limitations of 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the built-in tomotherapy MVCT onboard detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to other imaging devices, such as EPIDs, which are also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article, entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article, 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'. A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1 and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
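
The iterative reconstruction described above couples a simple forward dose model with measured transit signals. The following is a minimal, purely illustrative sketch of such an update loop, not the algorithm from the thesis; the forward model, the multiplicative update and all names (forward_project, reconstruct_dose) are assumptions made for the example.

```python
import numpy as np

def forward_project(dose_3d):
    """Hypothetical stand-in for the transport step: predicts the transmitted
    signal seen by the detector array from a 3D dose grid. Here it is simply
    a sum along the beam axis, for illustration only."""
    return dose_3d.sum(axis=0)

def reconstruct_dose(initial_dose, measured_transit, n_iter=20, tol=1e-3):
    """Iteratively corrects the planned 3D dose so that its predicted transit
    signal matches the measured one (a multiplicative update, as a sketch)."""
    dose = initial_dose.copy()
    for _ in range(n_iter):
        predicted = forward_project(dose)
        ratio = measured_transit / np.maximum(predicted, 1e-9)
        dose *= ratio[np.newaxis, :, :]           # broadcast correction along the beam axis
        if np.max(np.abs(ratio - 1.0)) < tol:     # converged: prediction matches measurement
            break
    return dose

# Toy example: planned dose grid and a "measured" transit with a 5% delivery error.
planned = np.ones((40, 8, 8))
measured = forward_project(planned) * 1.05
reconstructed = reconstruct_dose(planned, measured)
print("mean reconstructed / planned dose:", reconstructed.mean() / planned.mean())
```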

Abstract:

The direct torque control (DTC) has become an accepted vector control method alongside current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore, the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC, the stator flux linkage reference is usually kept constant; achieving the minimum current requires controlling this reference. An on-line method that minimizes the current by controlling the stator flux linkage reference is presented. The control of the reference above the base speed is also considered. A new flux linkage estimate is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
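
The initial rotor angle estimation outlined above fits inductance measurements taken in several directions to a model that is nonlinear in the rotor angle. Below is a small hedged sketch of that idea using a simple two-pole saliency model L(theta) = L0 - L2*cos(2*(theta - theta_r)); the model form, the data and the function names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def inductance_model(theta, L0, L2, theta_r):
    """Assumed saliency model: inductance seen from excitation angle theta
    when the rotor d-axis is at theta_r."""
    return L0 - L2 * np.cos(2.0 * (theta - theta_r))

def estimate_rotor_angle(theta_meas, L_meas):
    """Fit (L0, L2, theta_r) to measured inductances with nonlinear least squares."""
    def residuals(p):
        L0, L2, theta_r = p
        return inductance_model(theta_meas, L0, L2, theta_r) - L_meas
    p0 = [np.mean(L_meas), 0.5 * (L_meas.max() - L_meas.min()), 0.0]
    result = least_squares(residuals, p0)
    return result.x[2] % np.pi   # saliency is pi-periodic; magnet polarity needs a separate test

# Synthetic measurements in several directions, true rotor angle 0.7 rad.
angles = np.linspace(0, np.pi, 8, endpoint=False)
measured = inductance_model(angles, L0=8e-3, L2=2e-3, theta_r=0.7)
measured += np.random.default_rng(0).normal(0, 5e-5, angles.size)
print("estimated rotor angle [rad]:", estimate_rotor_angle(angles, measured))
```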

Abstract:

This Master's thesis presents the protocol development for a wireless measurement and monitoring system. The thesis reviews the issues to be taken into account in protocol development and presents the implementation of a pilot system based on wireless condition monitoring. The pilot system is the Jussi moisture-guard system by Ensto Busch-Jaeger Oy, which is converted to a wireless version. Data transfer in the system is unidirectional and takes place over a radio link at 433.92 MHz. The goal of the work was to develop a simple but reliable signalling system. The protocol implemented for it encodes the transmitted data in a manner similar to NRZ-L coding. Error detection and correction are performed using a parity bit and the Hamming distance. In addition, redundancy (parallel transmission) has been added to the link protocol to secure the unidirectional data transfer. Tests performed on the developed protocol show it to be reliable in the chosen transmission environment.
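
As a rough illustration of the kind of error handling the abstract describes (a parity bit combined with a minimum Hamming distance between valid codewords), here is a small sketch; the codeword table and the frame layout are invented for the example and do not reproduce the actual protocol.

```python
def parity_bit(bits):
    """Even-parity bit over a sequence of 0/1 values."""
    return sum(bits) % 2

def hamming_distance(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical set of valid message codewords, chosen so that any two of them
# differ in at least 3 positions (a single bit error stays closest to the
# codeword that was actually sent).
VALID_CODEWORDS = [
    (0, 0, 0, 0, 0, 0),
    (1, 1, 1, 0, 0, 0),
    (0, 0, 0, 1, 1, 1),
    (1, 1, 1, 1, 1, 1),
]

def decode_frame(frame):
    """Frame = 6 data bits + 1 even-parity bit over the data bits.
    Returns the corrected data codeword, or None if the frame is rejected."""
    data, received_parity = frame[:6], frame[6]
    best = min(VALID_CODEWORDS, key=lambda cw: hamming_distance(cw, data))
    d = hamming_distance(best, data)
    if d == 0:
        # A parity mismatch here would mean only the parity bit was corrupted.
        return best
    if d == 1 and parity_bit(data) != received_parity:
        # Parity confirms an odd number of data-bit errors; correct the single flip.
        return best
    return None  # inconsistent or too many errors: reject the frame

# A single bit error in the third position is corrected.
sent = (1, 1, 1, 0, 0, 0)
frame = (1, 1, 0, 0, 0, 0) + (parity_bit(sent),)
print(decode_frame(frame))  # -> (1, 1, 1, 0, 0, 0)
```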

Abstract:

This Master's thesis deals with the development of the design and testing environment for the user interface software of Nokia Mobile Phones' mobile phones. Two software modules were added to the environment to support simulation and version control. With the visualization tool, the behaviour of the phone can be traced in the design diagrams as state transitions, while the comparison application shows the differences between diagrams graphically. The developed applications improve the user interface design process by making error detection, optimization and version control more efficient. The benefits of the visualization tool are significant, because the behaviour of the user interface applications can be observed in the design diagrams during real-time simulation, so errors can be located immediately. The tool can also be exploited when optimizing the diagrams, which reduces the size and memory footprint of the applications. The graphical comparison tool benefits concurrent software development: the differences between different versions of a design diagram can be seen directly in the diagram instead of through manual comparison. Both tools were successfully taken into use at NMP at the beginning of 2001.
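
To illustrate the idea behind such a comparison tool (not the actual NMP implementation), a design diagram can be reduced to a set of state transitions and two versions compared as sets; the transition format and the example diagrams below are invented.

```python
# Each transition is (source_state, event, target_state); the diagrams here are invented.
diagram_v1 = {
    ("Idle", "key_press", "MenuOpen"),
    ("MenuOpen", "select", "Settings"),
    ("Settings", "back", "MenuOpen"),
}
diagram_v2 = {
    ("Idle", "key_press", "MenuOpen"),
    ("MenuOpen", "select", "Settings"),
    ("Settings", "back", "Idle"),          # changed target state
    ("Idle", "long_press", "PowerMenu"),   # new transition
}

def diff_diagrams(old, new):
    """Return the transitions removed from, and added to, the newer diagram version."""
    return sorted(old - new), sorted(new - old)

removed, added = diff_diagrams(diagram_v1, diagram_v2)
print("removed:", removed)
print("added:  ", added)
```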

Abstract:

The Nucleus accumbens (Nacc) has been proposed to act as a limbic-motor interface. Here, using invasive intraoperative recordings in an awake patient suffering from obsessive-compulsive disorder (OCD), we demonstrate that its activity is modulated by the quality of the subject's performance in a choice reaction time task designed to tap action monitoring processes. Action monitoring, that is, error detection and correction, is thought to be supported by a system involving the dopaminergic midbrain, the basal ganglia, and the medial prefrontal cortex. In surface electrophysiological recordings, action monitoring is indexed by an error-related negativity (ERN) appearing time-locked to erroneous responses and emanating from the medial frontal cortex. In preoperative scalp recordings, the patient's ERN was found to be significantly increased compared to a large (n = 83) normal sample, suggesting enhanced action monitoring processes. Intraoperatively, error-related modulations were obtained from the Nacc but not from a site 5 mm above it. Importantly, cross-correlation analysis showed that error-related activity in the Nacc preceded surface activity by 40 ms. We propose that the Nacc is involved in action monitoring, possibly by using error signals from the dopaminergic midbrain to adjust the relative impact of limbic and prefrontal inputs on frontal control systems in order to optimize goal-directed behavior.
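
As a generic illustration of how a cross-correlation lag such as the reported 40 ms lead can be estimated from two signals (this is not the study's analysis pipeline, and the synthetic data are invented):

```python
import numpy as np

def lead_in_ms(sig_a, sig_b, fs):
    """Return by how many milliseconds sig_a leads sig_b,
    estimated from the peak of their cross-correlation."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(b, a, mode="full")    # positive lag -> a precedes b
    lags = np.arange(-len(a) + 1, len(b))
    return 1000.0 * lags[np.argmax(xcorr)] / fs

# Synthetic example: a deep signal and a surface signal delayed by 40 ms at 1000 Hz.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
deep = np.exp(-((t - 0.30) ** 2) / (2 * 0.01 ** 2))   # "error-related" deflection at 300 ms
surface = np.roll(deep, 40)                            # same deflection 40 ms later
print(lead_in_ms(deep, surface, fs))                   # ~40.0
```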

Abstract:

Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature industrial-strength tool support, the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides the designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with integrating techniques such as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from areas such as robotics, space, healthcare and the cloud domain.
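
A minimal sketch of the kind of discrete-event simulation SimPy supports, here estimating how a hypothetical component's failures and reconfigurations translate into availability; the rates, the structure and all names are invented for illustration and are unrelated to the thesis' case studies.

```python
import random
import simpy

MTBF = 100.0     # hypothetical mean time between component failures
RECONF = 5.0     # hypothetical mean time to detect the error and reconfigure

def component(env, stats):
    """A component that alternates between operating and being reconfigured."""
    while True:
        up = random.expovariate(1.0 / MTBF)       # time until the next failure
        yield env.timeout(up)
        stats["uptime"] += up
        down = random.expovariate(1.0 / RECONF)   # error detection + reconfiguration
        yield env.timeout(down)
        stats["downtime"] += down
        stats["failures"] += 1

random.seed(1)
stats = {"uptime": 0.0, "downtime": 0.0, "failures": 0}
env = simpy.Environment()
env.process(component(env, stats))
env.run(until=10_000)

availability = stats["uptime"] / (stats["uptime"] + stats["downtime"])
print(f"failures: {stats['failures']}, estimated availability: {availability:.3f}")
```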

Abstract:

Currently, laser scribing is a growing material processing method in industry. The benefits of laser scribing technology are being studied, for example, for improving the efficiency of solar cells. Due to the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects during processing. However, there is a lack of studies on real-time monitoring of laser scribing. Commonly used monitoring methods developed for other laser processes, such as laser welding, are too slow, and existing applications cannot be implemented in fast laser scribing monitoring. The aim of this thesis is to find a method for monitoring laser scribing with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system with experiments. The laser used in the experiments is an IPG ytterbium pulsed fiber laser with 20 W maximum average power, and the scan head optics used with the laser is Scanlab's Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner using a camera adapter to follow the laser process. A powerful, fully programmable industrial computer was chosen for executing the image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using LabVIEW system design software. The performance of the algorithms was analyzed using a non-moving image of the scribing line with a resolution of 960x20 pixels. As a result, the maximum analysis speed was 560 frames per second. The reliability of the algorithm was evaluated by imaging a scribing path with a variable number of defects at 2000 mm/s with the laser turned off; the image analysis speed was 430 frames per second. The experiment was successful and, as a result, the algorithms detected all defects on the scribing path. The final monitoring experiment was performed during a laser process. However, it was challenging to get active laser illumination to work with the laser scanner due to the physical dimensions of the laser lens and the scanner. For reliable error detection, the illumination system needs to be replaced.
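
The defect analysis described above is based on particle analysis, i.e. thresholding an image and measuring connected blobs. A rough Python/NumPy analogue of that idea is sketched below; it is not the LabVIEW implementation, and the thresholds and frame contents are invented.

```python
import numpy as np
from scipy import ndimage

def detect_defects(frame, intensity_thresh=0.5, min_area=4):
    """Particle-analysis style defect detection on one 20x960 grayscale frame:
    threshold the image, label connected regions ("particles") and report
    those large enough to count as defects. Thresholds are illustrative."""
    mask = frame > intensity_thresh                       # pixels deviating from the nominal line
    labels, n = ndimage.label(mask)                       # connected-component (particle) labelling
    sizes = ndimage.sum(mask, labels, range(1, n + 1))    # particle areas in pixels
    defects = []
    for i, area in enumerate(sizes, start=1):
        if area >= min_area:
            cy, cx = ndimage.center_of_mass(mask, labels, i)
            defects.append({"area": int(area), "x": cx, "y": cy})
    return defects

# Synthetic 20x960 frame with one injected defect blob.
frame = np.zeros((20, 960))
frame[8:12, 400:410] = 1.0
print(detect_defects(frame))
```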

Abstract:

DNA assembly is among the most fundamental and difficult problems in bioinformatics. Near-optimal assembly solutions are available for bacterial and small genomes; however, assembling large and complex genomes, especially the human genome, using Next-Generation Sequencing (NGS) technologies has proven very difficult because of the highly repetitive and complex nature of the human genome, short read lengths, uneven data coverage and tools that are not specifically built for human genomes. Moreover, many algorithms are not even scalable to human genome datasets containing hundreds of millions of short reads. The DNA assembly problem is usually divided into several subproblems including DNA data error detection and correction, contig creation, scaffolding and contig orientation; each can be seen as a distinct research area. This thesis specifically focuses on creating contigs from short reads and combining them with outputs from other tools in order to obtain better results. Three different assemblers, SOAPdenovo [Li09], Velvet [ZB08] and Meraculous [CHS+11], are selected for comparative purposes in this thesis. The obtained results show that this thesis' work produces results comparable to other assemblers, and that combining our contigs with outputs from other tools produces the best results, outperforming all other investigated assemblers.
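
Contig creation from short reads can be sketched, very roughly, as repeatedly merging reads that share a sufficiently long overlap. The toy greedy routine below (invented reads, no error handling, no graph construction) only illustrates the idea and is unrelated to the assemblers compared in the thesis.

```python
def overlap(a, b, min_len=4):
    """Length of the longest suffix of a that is a prefix of b (>= min_len), else 0."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def greedy_assemble(reads, min_len=4):
    """Repeatedly merge the pair of reads with the largest suffix/prefix overlap."""
    reads = list(reads)
    while True:
        best = (0, None, None)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:
            return reads                       # no more overlaps: remaining strings are contigs
        merged = reads[i] + reads[j][olen:]    # join the two reads on their overlap
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]

# Toy reads covering one region; a real assembler works on millions of error-prone reads.
print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC"]))
```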

Abstract:

A cryptosystem using linear codes was developed in 1978 by McEliece. Later, in 1985, Niederreiter and others developed a modified version of the cryptosystem using concepts of linear codes. However, these systems were not used frequently because of their large key size. In this study we design a cryptosystem using the concepts of algebraic geometric codes with a smaller key size. Error detection and correction can be done efficiently by simple decoding methods using the cryptosystem developed. Approach: Algebraic geometric codes are codes generated using curves. The cryptosystem uses basic concepts of elliptic curve cryptography and a generator matrix. Decrypted information takes the form of a repetition code; due to this, the complexity of the decoding procedure is reduced. Error detection and correction can be carried out efficiently by solving a simple system of linear equations, thereby imposing the concepts of security along with error detection and correction. Results: The algorithm is implemented in MATLAB and a comparative analysis is carried out on various parameters of the system. Attacks are common to all cryptosystems, but by securely choosing the curve, the field and the representation of elements in the field, we can overcome the attacks and a stable system can be generated. Conclusion: The algorithm defined here protects the information from an intruder and also from errors in the communication channel by efficient error correction methods.
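
A toy sketch of the McEliece idea this line of work builds on: the public key is a scrambled and permuted generator matrix, encryption adds a few intentional errors, and the legitimate receiver undoes the permutation and decodes. Here a trivial repetition code stands in for the secret code (real systems use Goppa or algebraic-geometric codes), and all matrices and parameters are invented for the example.

```python
import numpy as np

k, r = 2, 5                      # 2 message bits, each repeated 5 times
n, t = k * r, 2                  # codeword length 10, up to 2 intentional errors

# Secret generator matrix of the repetition code: bit i maps to positions i*r .. i*r+r-1.
G = np.kron(np.eye(k, dtype=int), np.ones((1, r), dtype=int))
S = np.array([[1, 1], [0, 1]])                    # secret invertible scrambler (self-inverse mod 2)
perm = np.random.default_rng(0).permutation(n)    # secret column permutation

G_pub = (S @ G)[:, perm] % 2                      # public key: disguised generator matrix

def encrypt(m, rng):
    """Encode with the public matrix and add t random intentional errors."""
    e = np.zeros(n, dtype=int)
    e[rng.choice(n, size=t, replace=False)] = 1
    return (m @ G_pub + e) % 2

def decrypt(c):
    """Undo the permutation, majority-decode each repetition block, unscramble."""
    c_unperm = np.empty_like(c)
    c_unperm[perm] = c                                         # undo the secret permutation
    blocks = c_unperm.reshape(k, r)
    m_scrambled = (blocks.sum(axis=1) > r // 2).astype(int)    # majority vote per block
    return (m_scrambled @ S) % 2                               # S is its own inverse mod 2

rng = np.random.default_rng(42)
message = np.array([1, 0])
ciphertext = encrypt(message, rng)
recovered = decrypt(ciphertext)
print("ciphertext:", ciphertext)
print("decrypted :", recovered, "matches:", np.array_equal(recovered, message))
```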

Abstract:

Code clones are portions of source code that are similar to the original program code. The presence of code clones is considered a bad feature of software, as software maintenance becomes difficult due to their presence. Methods for code clone detection have gained immense significance in the last few years as they play a significant role in engineering applications such as analysis of program code, program understanding, plagiarism detection, error detection, code compaction and many other similar tasks. Despite all these facts, several features of code clones, if properly utilized, can make the software development process easier. In this work, we point out such a feature of code clones, which highlights the relevance of code clones in test sequence identification. Here, program slicing is used in code clone detection. In addition, a classification of code clones is presented and the benefit of using program slicing in code clone detection is also discussed in this work.
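
As a rough illustration of the simplest kind of clone detection (not the slicing-based approach of this work), consecutive lines can be normalized and hashed so that structurally identical fragments in different places are reported as clone candidates; the normalization rules below are deliberately naive and invented for the example.

```python
import re
from collections import defaultdict

def normalize(line):
    """Naive normalization: strip whitespace and replace identifiers and
    numeric literals with placeholders so renamed copies compare equal."""
    line = re.sub(r"\b[A-Za-z_]\w*\b", "ID", line.strip())   # identifiers -> ID
    return re.sub(r"\b\d+\b", "NUM", line)                   # numeric literals -> NUM

def find_clones(source, window=3):
    """Report groups of starting line numbers whose `window`-line fragments
    normalize to the same text (clone candidates)."""
    lines = source.splitlines()
    seen = defaultdict(list)
    for i in range(len(lines) - window + 1):
        key = "\n".join(normalize(l) for l in lines[i:i + window])
        seen[key].append(i + 1)
    return [locs for locs in seen.values() if len(locs) > 1]

code = """total = 0
for x in items:
    total += x * 2
result = 0
for y in values:
    result += y * 2
"""
print(find_clones(code))   # lines 1-3 and 4-6 normalize identically -> [[1, 4]]
```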

Abstract:

The general packet radio service (GPRS) has been developed to allow packet data to be transported efficiently over an existing circuit-switched radio network, such as GSM. The main applications of GPRS are in transporting Internet protocol (IP) datagrams from web servers (for telemetry or for mobile Internet browsers). Four GPRS baseband coding schemes are defined to offer a trade-off between requested data rates and propagation channel conditions. However, data rates of the order of > 100 kbit/s are only achievable if the simplest coding scheme (CS-4) is used, which offers little error detection and correction (EDC) (requiring excellent SNR), and if the receiver hardware is capable of full duplex operation, which is not currently available in the consumer market. A simple EDC scheme to improve the GPRS block error rate (BLER) performance is presented, particularly for CS-4, although gains are also seen in other coding schemes. For every GPRS radio block that is corrected by the EDC scheme, the block does not need to be retransmitted, releasing bandwidth in the channel and improving the user's application data rate. As GPRS requires intensive processing in the baseband, a viable field programmable gate array (FPGA) solution is presented in this paper.
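
The throughput benefit the abstract alludes to can be sketched with a simple retransmission model: every erroneous block that the EDC scheme repairs no longer has to be retransmitted, so the effective data rate rises. The figures and the model below are purely illustrative.

```python
def effective_rate(nominal_kbps, bler, corrected_fraction=0.0):
    """Effective user data rate under a selective-repeat style model, where
    throughput scales with the fraction of blocks that need no retransmission.
    `corrected_fraction` is the share of erroneous blocks repaired by the EDC
    scheme instead of being retransmitted. All numbers are illustrative."""
    residual_bler = bler * (1.0 - corrected_fraction)
    return nominal_kbps * (1.0 - residual_bler)

# Illustrative figures: nominal rate just above 100 kbit/s, 20% of blocks in error.
print(effective_rate(100.0, bler=0.20))                          # no extra EDC   -> 80.0
print(effective_rate(100.0, bler=0.20, corrected_fraction=0.5))  # half repaired  -> 90.0
```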

Abstract:

The General Packet Radio Service (GPRS) was developed to allow packet data to be transported efficiently over an existing circuit-switched radio network. The main applications for GPRS are in transporting IP datagrams from the user's mobile Internet browser to and from the Internet, or in telemetry equipment. A simple Error Detection and Correction (EDC) scheme to improve the GPRS Block Error Rate (BLER) performance is presented, particularly for coding scheme 4 (CS-4), although gains are also seen in other coding schemes. For every GPRS radio block that is corrected by the EDC scheme, the block does not need to be retransmitted, releasing bandwidth in the channel and improving throughput and the user's application data rate. As GPRS requires intensive processing in the baseband, a viable hardware solution for a GPRS BLER co-processor is discussed; it has been implemented in a Field Programmable Gate Array (FPGA) and is presented in this paper.

Abstract:

Individual differences in cognitive style can be characterized along two dimensions: ‘systemizing’ (S, the drive to analyze or build ‘rule-based’ systems) and ‘empathizing’ (E, the drive to identify another's mental state and respond to this with an appropriate emotion). Discrepancies between these two dimensions in one direction (S > E) or the other (E > S) are associated with sex differences in cognition: on average more males show an S > E cognitive style, while on average more females show an E > S profile. The neurobiological basis of these different profiles remains unknown. Since individuals may be typical or atypical for their sex, it is important to move away from the study of sex differences and towards the study of differences in cognitive style. Using structural magnetic resonance imaging we examined how neuroanatomy varies as a function of the discrepancy between E and S in 88 adult males from the general population. Selecting just males allows us to study discrepant E-S profiles in a pure way, unconfounded by other factors related to sex and gender. An increasing S > E profile was associated with increased gray matter volume in cingulate and dorsal medial prefrontal areas which have been implicated in processes related to cognitive control, monitoring, error detection, and probabilistic inference. An increasing E > S profile was associated with larger hypothalamic and ventral basal ganglia regions which have been implicated in neuroendocrine control, motivation and reward. These results suggest an underlying neuroanatomical basis linked to the discrepancy between these two important dimensions of individual differences in cognitive style.

Abstract:

This exploratory study is concerned with the performance of Egyptian children with Down syndrome on counting and error detection tasks and investigates how these children acquire counting. Observations and interviews were carried out to collect further information about their performance in a class context. Qualitative and quantitative analysis suggested a notable deficit in counting in Egyptian children with Down syndrome, with none of the children able to recite the number string up to ten or count a set of five objects correctly. They performed less well on tasks which placed a greater load on memory. The tentative findings of this exploratory study support previous research findings that children with Down syndrome acquire counting by rote, and link this with their learning experiences.