991 results for Mathematical functions


Relevance:

20.00%

Publisher:

Abstract:

The thesis studies role-based access control (RBAC) and its suitability in the enterprise environment. The aim is to investigate how extensively role-based access control can be implemented in the case organization and how it supports the organization's business and IT functions. The study identifies the enterprise's needs for access control, the factors that shape access control in the enterprise environment, the requirements for implementation, and the benefits and challenges it brings. To determine how extensively role-based access control can be implemented in the case organization, the actual state of access control is examined first; a rudimentary desired state (how things should be) is then defined and subsequently refined using the results of implementing a role-based access control application. The study yields a role model for the case organization unit, together with the building blocks and framework for an organization-wide implementation. The ultimate value for the organization is delivered by facilitating its normal operations while protecting its information assets.
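
As a rough illustration of the access-control model the thesis builds on, the minimal Python sketch below maps users to roles and roles to permissions, so that a user's rights follow entirely from role membership. All names (`Rbac`, `grant`, the example roles) are hypothetical and not taken from the thesis.

```python
# Minimal sketch of the core RBAC model: users acquire permissions
# only through the roles assigned to them. Names are illustrative.

class Rbac:
    def __init__(self):
        self.role_permissions = {}   # role -> set of permissions
        self.user_roles = {}         # user -> set of roles

    def grant(self, role, permission):
        self.role_permissions.setdefault(role, set()).add(permission)

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def can(self, user, permission):
        # A user may perform an action iff any assigned role grants it.
        return any(permission in self.role_permissions.get(r, set())
                   for r in self.user_roles.get(user, set()))

rbac = Rbac()
rbac.grant("accounting-clerk", "invoice:read")
rbac.grant("accounting-manager", "invoice:approve")
rbac.assign("alice", "accounting-clerk")

assert rbac.can("alice", "invoice:read")
assert not rbac.can("alice", "invoice:approve")
```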

Relevance:

20.00%

Publisher:

Abstract:

Recent advances in machine learning increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, especially in bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, but their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate such prior knowledge about the task in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words, and we show that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions, and we design a fast cross-validation algorithm for learning algorithms of the regularized least-squares type. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel kernels and cost functions can be used efficiently in such algorithms.
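
To make the cross-validation result concrete: for regularized least-squares, the leave-one-out residuals are available in closed form from a single fit, via the hat matrix H = K(K + λI)⁻¹. The Python sketch below shows this standard shortcut; it illustrates the idea only and is not necessarily the thesis's exact algorithm.

```python
# Minimal sketch of fast leave-one-out cross-validation for kernel
# regularized least-squares (RLS). The LOO residual for sample i is
# obtained from a single fit as (y_i - yhat_i) / (1 - H_ii), where
# H = K (K + lambda I)^{-1} is the hat (smoother) matrix.
import numpy as np

def rls_loo_residuals(K, y, lam):
    n = K.shape[0]
    H = K @ np.linalg.inv(K + lam * np.eye(n))
    yhat = H @ y
    return (y - yhat) / (1.0 - np.diag(H))

# Toy usage with a Gaussian kernel: pick lambda with the smallest LOO MSE.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X[:, 0] + 0.1 * rng.normal(size=50)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)
for lam in (0.01, 0.1, 1.0):
    loo = rls_loo_residuals(K, y, lam)
    print(lam, np.mean(loo ** 2))
```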

Relevance:

20.00%

Publisher:

Abstract:

The conversion of cellular prion protein (PrPc), a GPI-anchored protein, into a proteinase K-resistant and infectious form (generally termed PrPsc) is mainly responsible for Transmissible Spongiform Encephalopathies (TSEs), characterized by neuronal degeneration and progressive loss of basic brain functions. Although PrPc is expressed by a wide range of tissues throughout the body, the complete repertoire of its functions has not been fully determined. Recent studies have confirmed its participation in basic physiological processes such as cell proliferation and the regulation of cellular homeostasis. Other studies indicate that PrPc interacts with several molecules to activate signaling cascades with a large number of cellular effects. To determine PrPc functions, transgenic mouse models have been generated in the last decade. In particular, mice lacking specific domains of the PrPc protein have revealed the contribution of these domains to neurodegenerative processes. A dual role of PrPc has been shown, since most authors report protective roles for this protein while others describe pro-apoptotic functions. In this review, we summarize new findings on PrPc functions, especially those related to neural degeneration and cell signaling.

Relevance:

20.00%

Publisher:

Abstract:

Objective: The aim of the current study was to investigate the long-term cognitive effects of electroconvulsive therapy (ECT) in a sample of adolescent patients in whom schizophrenia spectrum disorders were diagnosed. Methods: The sample was composed of nine adolescent subjects in whom schizophrenia or schizoaffective disorder was diagnosed according to DSM-IV-TR criteria and on whom ECT was conducted (ECT group), and nine adolescent subjects matched by age, socioeconomic status, diagnosis, and Positive and Negative Syndrome Scale (PANSS) total score at baseline on whom ECT was not conducted (NECT group). Clinical and neuropsychological assessments were carried out at baseline before ECT treatment and at 2-year follow-up. Results: Significant differences were found between groups in the number of unsuccessful medication trials. No statistically significant differences were found between the ECT group and the NECT group either in severity as assessed by the PANSS or in any cognitive variable at baseline. At follow-up, both groups showed significant improvement in clinical variables (positive, general, and total PANSS scores and Clinical Global Impressions-Improvement). In the cognitive assessment at follow-up, significant improvement was found in both groups in the semantic category of the verbal fluency task and in digits forward. However, no significant differences were found between groups in any clinical or cognitive variable at follow-up. Repeated-measures analysis found no significant time × group interaction in any clinical or neuropsychological measure. Conclusions: The current study showed no significant differences in change over time in clinical or neuropsychological variables between the ECT group and the NECT group at 2-year follow-up. Thus, ECT showed no negative influence on long-term neuropsychological variables in our sample.

Relevance:

20.00%

Publisher:

Abstract:

Previous functional MRI (fMRI) studies have associated anterior hippocampus with imagining and recalling scenes, imagining the future, recalling autobiographical memories and visual scene perception. We have observed that this typically involves the medial rather than the lateral portion of the anterior hippocampus. Here, we investigated which specific structures of the hippocampus underpin this observation. We had participants imagine novel scenes during fMRI scanning, as well as recall previously learned scenes from two different time periods (one week and 30 min prior to scanning), with analogous single object conditions as baselines. Using an extended segmentation protocol focussing on anterior hippocampus, we first investigated which substructures of the hippocampus respond to scenes, and found both imagination and recall of scenes to be associated with activity in presubiculum/parasubiculum, a region associated with spatial representation in rodents. Next, we compared imagining novel scenes to recall from one week or 30 min before scanning. We expected a strong response to imagining novel scenes and 1-week recall, as both involve constructing scene representations from elements stored across cortex. By contrast, we expected a weaker response to 30-min recall, as representations of these scenes had already been constructed but not yet consolidated. Both imagination and 1-week recall of scenes engaged anterior hippocampal structures (anterior subiculum and uncus respectively), indicating possible roles in scene construction. By contrast, 30-min recall of scenes elicited significantly less activation of anterior hippocampus but did engage posterior CA3. Together, these results elucidate the functions of different parts of the anterior hippocampus, a key brain area about which little is definitively known.

Relevance:

20.00%

Publisher:

Abstract:

This paper seeks to address the problem of the empirical identification of housing market segmentation, once we assume that submarkets exist. The typical difficulty in identifying housing submarkets when dealing with many locations is the vast number of potential solutions and, in such cases, the use of the Chow test for hedonic functions is not a practical solution. Here, we solve this problem by undertaking an identification process with a heuristic for spatially constrained clustering, the "Housing Submarket Identifier" (HouSI). The solution is applied to the housing market in the city of Barcelona (Spain), where we estimate a hedonic model for fifty thousand dwellings aggregated into ten groups. In order to determine the utility of the procedure, we seek to verify whether the final solution provided by the heuristic is comparable with the division of the city into ten administrative districts.
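
For readers unfamiliar with hedonic models: a submarket-specific hedonic regression explains (log) dwelling price from attributes, with separate coefficients for each segment. The Python sketch below is a minimal illustration on simulated data; the attributes, labels, and clustering input are assumptions, not the HouSI heuristic itself.

```python
# Minimal sketch of a hedonic price model estimated separately per
# submarket: log(price) regressed on dwelling attributes via OLS.
import numpy as np

def fit_hedonic(X, log_price):
    """OLS fit of log(price) = X @ beta + e; X includes an intercept."""
    beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
    return beta

rng = np.random.default_rng(1)
n = 200
attrs = rng.normal(size=(n, 2))           # e.g. floor area, building age
submarket = rng.integers(0, 2, size=n)    # labels from some clustering step
log_price = (12 + 0.3 * attrs[:, 0] - 0.1 * attrs[:, 1]
             + 0.2 * submarket + 0.05 * rng.normal(size=n))

X = np.column_stack([np.ones(n), attrs])
for s in np.unique(submarket):
    m = submarket == s
    print(s, fit_hedonic(X[m], log_price[m]))  # per-submarket coefficients
```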

Relevance:

20.00%

Publisher:

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown rapidly since it was introduced in the 1970s. Today it is an extensively used modality because of its ability to produce accurate diagnostic images. However, even though CT brings a clear benefit to patient care, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among these, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. To ensure that the benefit-risk ratio remains in the patient's favour, the delivered dose must allow the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when children or young adults are examined, in particular in follow-up studies requiring several CT procedures over the patient's life. Children and young adults are more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur over their longer life expectancy. The recent introduction of iterative reconstruction algorithms, designed to substantially reduce dose, is certainly a major achievement in the evolution of CT, but it has also created difficulties in assessing the quality of the images these algorithms produce. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic question. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure a pertinent choice of image-quality criteria, this work was performed in close collaboration with radiologists, and it began by tackling the characterisation of image quality in musculoskeletal examinations.
We focused in particular on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we turned to mathematical model observers, with our experimental parameters determining the type of model to use. Ideal model observers were applied to characterise image quality when purely physical results about signal detectability were sought, whereas anthropomorphic model observers were used in more clinical contexts, when the results had to be compared with those of human observers, taking advantage of these models' incorporation of elements of the human visual system. This work confirmed that model observers make it possible to assess image quality with a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstruction, model-based ones offer the greatest potential, since images produced with this modality can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work clarified the role of the medical physicist in CT imaging: standard metrics remain important for assessing a unit's compliance with legal requirements, but model observers are the way forward for optimising imaging protocols.
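
As a concrete example of a task-based metric: the Python sketch below implements a non-prewhitening (NPW) model observer, one of the simplest mathematical observers, which scores detectability of a known signal via an index d'. It is only an illustration; the observer models actually used in this work may differ.

```python
# Minimal sketch of a non-prewhitening (NPW) model observer. d'
# measures how well a known signal can be detected in noisy images;
# task-based image quality improves as d' grows.
import numpy as np

def npw_dprime(signal_present, signal_absent):
    """Images as (n_images, n_pixels) arrays; returns detectability d'."""
    template = signal_present.mean(0) - signal_absent.mean(0)
    t_sp = signal_present @ template
    t_sa = signal_absent @ template
    pooled_var = 0.5 * (t_sp.var(ddof=1) + t_sa.var(ddof=1))
    return (t_sp.mean() - t_sa.mean()) / np.sqrt(pooled_var)

# Toy usage: a faint disc in white noise at two noise levels
# (lower noise, i.e. higher dose, should give higher d').
rng = np.random.default_rng(2)
yy, xx = np.mgrid[:32, :32]
signal = 0.5 * ((xx - 16) ** 2 + (yy - 16) ** 2 < 16).ravel()
for noise in (1.0, 2.0):
    sp = signal + noise * rng.normal(size=(200, signal.size))
    sa = noise * rng.normal(size=(200, signal.size))
    print(noise, npw_dprime(sp, sa))
```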

Relevance:

20.00%

Publisher:

Abstract:

We propose a new kernel estimation of the cumulative distribution function based on transformation and on bias-reducing techniques. We derive the optimal bandwidth that minimises the asymptotic integrated mean squared error. The simulation results show that our proposed kernel estimator improves on alternative approaches when the variable has an extreme value distribution with a heavy tail and the sample size is small.
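
For context, the plain (untransformed) kernel estimator of the CDF averages integrated kernels centred at the observations, F̂(x) = n⁻¹ Σᵢ Φ((x − Xᵢ)/h). The Python sketch below shows this baseline with a Gaussian kernel; the paper's transformation, bias-reduction step, and optimal bandwidth are not reproduced.

```python
# Minimal sketch of a plain kernel estimator of the CDF,
# F_hat(x) = (1/n) * sum_i Phi((x - X_i) / h), with a Gaussian kernel.
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, sample, h):
    # Average of Gaussian CDFs centred at each observation.
    return norm.cdf((np.asarray(x)[None, :] - sample[:, None]) / h).mean(0)

rng = np.random.default_rng(3)
data = rng.standard_normal(100)
grid = np.linspace(-3, 3, 7)
print(kernel_cdf(grid, data, h=0.4))   # smooth estimate of P(X <= x)
```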

Relevance:

20.00%

Publisher:

Abstract:

There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can substantially reduce the cost involved. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the equations governing an object in a flow exactly. Approximate solutions can be obtained by constructing and measuring prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process; the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical application in industry. Most of the numerical simulations were performed with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a capillary to a venule showed that the VOF-based model can successfully predict the deformation and flow of RBCs in an arteriole; furthermore, the result corresponds to experimental observations showing that the RBC deforms during this movement. The concluding remarks provide a sound methodology and a mathematical and numerical framework for the simulation of blood flow in branching geometries. Analyses of the ferrofluid simulations indicate that the magnetic Soret effect can be even stronger than the conventional one and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the cases where the magnetic field is parallel to the temperature gradient. In addition, the statistical evaluation (Taguchi technique) of the magnetic fluids showed that temperature and the initial concentration of the magnetic phase make the largest and smallest contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) for the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
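
To give the porous-media comparison some shape: correlations of the Macdonald et al. (1979) family are Ergun-type, combining a viscous and an inertial term in the pressure drop. The Python sketch below uses the classical Ergun constants (A = 150, B = 1.75) purely for illustration; Macdonald et al. refitted these constants, and the thesis's dimensionless formulation based on pore permeability is not reproduced here.

```python
# Minimal sketch of an Ergun-type correlation for pressure drop in a
# packed porous medium:
#   dP/L = A*mu*(1-eps)^2/(eps^3*d^2)*u + B*rho*(1-eps)/(eps^3*d)*u^2
# Classical Ergun constants A=150, B=1.75 are used for illustration.

def pressure_drop_per_length(u, d, eps, mu=1e-3, rho=1000.0,
                             A=150.0, B=1.75):
    """u: superficial velocity [m/s], d: particle diameter [m],
    eps: porosity; returns dP/L in Pa/m (water properties by default)."""
    viscous = A * mu * (1 - eps) ** 2 / (eps ** 3 * d ** 2) * u
    inertial = B * rho * (1 - eps) / (eps ** 3 * d) * u ** 2
    return viscous + inertial

for u in (0.001, 0.01, 0.1):   # low to moderate Reynolds numbers
    print(u, pressure_drop_per_length(u, d=1e-3, eps=0.4))
```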

Relevance:

20.00%

Publisher:

Abstract:

Neural networks are a set of mathematical methods and computer programs designed to simulate the information processing and knowledge acquisition of the human brain. In recent years their application in chemistry has increased significantly, owing to their particular suitability for modelling complex systems. The basic principles of two types of neural networks, multi-layer perceptrons and radial basis function networks, are introduced, as well as a pruning approach to architecture optimization. Two analytical applications based on near-infrared spectroscopy are presented: the first for the determination of nitrogen content in wheat leaves using multi-layer perceptron networks, and the second for the determination of Brix in sugar cane juices using radial basis function networks.
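
As a minimal sketch of one of the two architectures discussed: a radial basis function network with fixed Gaussian centres reduces to a linear least-squares problem for the output weights. The data, centre placement, and width below are toy assumptions, not the calibration models of the paper.

```python
# Minimal sketch of a radial basis function (RBF) network for
# regression: fixed Gaussian centres, linear output weights solved
# by least squares.
import numpy as np

def rbf_design(X, centres, width):
    # Design matrix of Gaussian activations, shape (n_samples, n_centres).
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 1))        # toy 1-D feature
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

centres = np.linspace(-1, 1, 10)[:, None]    # fixed grid of centres
Phi = rbf_design(X, centres, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output layer

X_test = np.array([[0.0], [0.5]])
print(rbf_design(X_test, centres, 0.3) @ w)  # network predictions
```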

Relevance:

20.00%

Publisher:

Abstract:

We prove that every transcendental meromorphic map $f$ with disconnected Julia set has a weakly repelling fixed point. This implies that the Julia set of Newton's method for finding zeroes of an entire map is connected. Moreover, extending a result of Cowen for holomorphic self-maps of the disc, we show the existence of absorbing domains for holomorphic self-maps of hyperbolic regions, whose iterates tend to a boundary point. In particular, the results imply that periodic Baker domains of Newton's method for entire maps are simply connected, which solves a well-known open question.
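
For reference, Newton's method for an entire map f is the meromorphic map shown below. Since all fixed points of a Newton map in the plane are attracting rather than weakly repelling, the connectivity of its Julia set follows from the fixed-point theorem stated above.

```latex
% Newton's method associated with an entire map f.
\[
  N_f(z) \;=\; z - \frac{f(z)}{f'(z)}.
\]
% A zero \zeta of f of multiplicity m is an attracting fixed point of
% N_f with multiplier N_f'(\zeta) = 1 - 1/m (superattracting when
% m = 1), so the dynamics of N_f locate the zeros of f.
```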

Relevance:

20.00%

Publisher:

Abstract:

An extensive literature suggests a link between executive functions and aggressive behavior in humans, pointing mostly to an inverse relationship, i.e., increased tendencies toward aggression in individuals scoring low on executive function tests. This literature is limited, though, in terms of the groups studied and the measures of executive functions used. In this paper, we present data from two studies addressing these issues. In a first behavioral study, we asked whether high trait aggressiveness is related to reduced executive functions. A sample of over 600 students completed an extensive behavioral test battery including paradigms addressing executive functions such as the Eriksen flanker task, Stroop task, n-back task, and Tower of London (TOL). High trait-aggressive participants were found to have a significantly reduced latency score in the TOL, indicating more impulsive behavior compared to low trait-aggressive participants. No other differences were detected. In an EEG study, we assessed neural and behavioral correlates of error monitoring and response inhibition in participants who were characterized based on their laboratory-induced aggressive behavior in a competitive reaction time task. Participants who retaliated more in the aggression paradigm and had reduced frontal activity when provoked did not, however, show any reduction in behavioral or neural correlates of executive control compared to the less aggressive participants. Our results question a strong relationship between aggression and executive functions, at least for healthy, high-functioning people.

Relevance:

20.00%

Publisher:

Abstract:

Several studies have suggested a bilingual advantage in executive functions, presumably due to bilinguals' massive practice with language switching, which requires executive resources, but the results are still somewhat controversial. Previous studies are also plagued by the inherent limitations of a natural groups design, in which the participant groups are bound to differ in many ways in addition to the variable used to classify them. In an attempt to introduce a complementary analysis approach, we employed multiple regression to study whether the performance of 30- to 75-year-old Finnish-Swedish bilinguals (N = 38) on tasks measuring different executive functions (inhibition, updating, and set shifting) could be predicted by the frequency of language switches in everyday life (as measured by a language switching questionnaire), by L2 age of acquisition, or by the self-estimated degree of use of both languages in everyday life. The most consistent effects were found for the set shifting task, where a higher rate of everyday language switches was related to a smaller mixing cost in errors. Mixing cost is thought to reflect top-down management of competing task sets, thus resembling the bilingual situation in which a decision about which language to use has to be made in each conversation. These findings provide additional support to the idea that some executive functions in bilinguals are affected by lifelong experience of language switching and, perhaps even more importantly, suggest a complementary approach to the study of this issue.