143 results for "Applications for positions".


Relevance:

20.00%

Publisher:

Abstract:

On the efficiency of recursive evaluations with applications to risk theory

This thesis consists of three essays on the efficiency of recursive evaluations of the distribution of the aggregate claim amount of a portfolio of insurance policies over a given period. The computation of its probability function, or of quantities related to this distribution, arises frequently in most areas of actuarial practice, for instance when calculating the solvency capital in Switzerland or when modelling the loss of a life insurance portfolio over one year. The main problem with recursive evaluations is that the propagation of errors stemming from the computer's representation of real numbers can be disastrous; on the other hand, the time they save by reducing the number of arithmetic operations is substantial compared with other methods. In the first essay, we use certain properties of a high-performance computing tool to optimise computation time while guaranteeing a certain quality of the results with respect to the propagation of these errors during the evaluation. In the second essay, we derive exact expressions and bounds for the errors that occur in cumulative distribution functions of a given order when these are evaluated recursively from an approximation of the associated De Pril transform; these cumulative functions allow direct computation of essential quantities such as stop-loss premiums. Finally, in the third essay, we study the stability of the recursive evaluations of these cumulative functions with respect to the propagation of the errors mentioned above, and determine the precision required in the representation of real numbers to guarantee satisfactory results; this precision depends largely on the associated De Pril transform.
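As background to the kind of recursion discussed, the classic recursive evaluation in risk theory is the Panjer recursion for a compound claims distribution (the thesis itself works with the De Pril transform; the sketch below is a minimal illustration under a compound Poisson assumption, not the thesis's implementation):

```python
import math

def panjer_poisson(lam, severity, s_max):
    """Panjer recursion for the aggregate-claims distribution of a compound
    Poisson model with claim frequency lam.
    severity[k] = P(single claim amount = k); assumes severity[0] == 0.
    Returns g with g[s] = P(aggregate claims = s) for s = 0..s_max."""
    g = [0.0] * (s_max + 1)
    g[0] = math.exp(-lam)                      # no claims => aggregate is 0
    for s in range(1, s_max + 1):
        acc = 0.0
        for k in range(1, min(s, len(severity) - 1) + 1):
            acc += k * severity[k] * g[s - k]
        # each g[s] reuses all earlier values, which is exactly why
        # floating-point rounding errors can propagate along the recursion
        g[s] = (lam / s) * acc
    return g
```

With all claims of size 1 the aggregate distribution reduces to a Poisson(lam) law, which gives an easy correctness check.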


PURPOSE: Small intestinal submucosa is a xenogenic, acellular, collagen rich membrane with inherent growth factors that has previously been shown to promote in vivo bladder regeneration. We evaluate in vitro use of small intestinal submucosa to support the individual and combined growth of bladder urothelial cells and smooth muscle cells for potential use in tissue engineering techniques, and in vitro study of the cellular mechanisms involved in bladder regeneration. MATERIALS AND METHODS: Primary cultures of human bladder urothelial cells and smooth muscle cells were established using standard enzymatic digestion or explant techniques. Cultured cells were then seeded on small intestinal submucosa at a density of 1 × 10^5 cells per cm^2, incubated and harvested at 3, 7, 14 and 28 days. The 5 separate culture methods evaluated were urothelial cells seeded alone on the mucosal surface of small intestinal submucosa, smooth muscle cells seeded alone on the mucosal surface, layered coculture of smooth muscle cells seeded on the mucosal surface followed by urothelial cells 1 hour later, sandwich coculture of smooth muscle cells seeded on the serosal surface followed by seeding of urothelial cells on the mucosal surface 24 hours later, and mixed coculture of urothelial cells and smooth muscle cells mixed and seeded together on the mucosal surface. Following harvesting at the designated time points small intestinal submucosa cell constructs were formalin fixed and processed for routine histology including Masson trichrome staining. Specific cell growth characteristics were studied with particular attention to cell morphology, cell proliferation and layering, cell sorting, presence of a pseudostratified urothelium and matrix penetrance. To aid in the identification of smooth muscle cells and urothelial cells in the coculture groups, immunohistochemical analysis was performed with antibodies to alpha-smooth muscle actin and cytokeratins AE1/AE3.
RESULTS: Progressive 3-dimensional growth of urothelial cells and smooth muscle cells occurred in vitro on small intestinal submucosa. When seeded alone urothelial cells and smooth muscle cells grew in several layers with minimal to no matrix penetration. In contrast, layered, mixed and sandwich coculture methods demonstrated significant enhancement of smooth muscle cell penetration of the membrane. The layered and sandwich coculture techniques resulted in organized cell sorting, formation of a well-defined pseudostratified urothelium and multilayered smooth muscle cells with enhanced matrix penetration. With the mixed coculture technique there was no evidence of cell sorting although matrix penetrance by the smooth muscle cells was evident. Immunohistochemical studies demonstrated that urothelial cells and smooth muscle cells maintain the expression of the phenotypic markers of differentiation alpha-smooth muscle actin and cytokeratins AE1/AE3. CONCLUSIONS: Small intestinal submucosa supports the 3-dimensional growth of human bladder cells in vitro. Successful combined growth of bladder cells on small intestinal submucosa with different seeding techniques has important future clinical implications with respect to tissue engineering technology. The results of our study demonstrate that there are important smooth muscle cell-epithelial cell interactions involved in determining the type of in vitro cell growth that occurs on small intestinal submucosa. Small intestinal submucosa is a valuable tool for in vitro study of the cell-cell and cell-matrix interactions that are involved in regeneration and various disease processes of the bladder.


A recurring task in the analysis of mass genome annotation data from high-throughput technologies is the identification of peaks or clusters in a noisy signal profile. Examples of such applications are the definition of promoters on the basis of transcription start site profiles, the mapping of transcription factor binding sites based on ChIP-chip data and the identification of quantitative trait loci (QTL) from whole genome SNP profiles. Input to such an analysis is a set of genome coordinates associated with counts or intensities. The output consists of a discrete number of peaks with respective volumes, extensions and center positions. For this purpose we have developed a flexible one-dimensional clustering tool, called MADAP, which we make available as a web server and as a standalone program. A set of parameters enables the user to customize the procedure to a specific problem. The web server, which returns results in textual and graphical form, is useful for small to medium-scale applications, as well as for evaluation and parameter tuning in view of large-scale applications, which require a local installation. The program, written in C++, can be freely downloaded from ftp://ftp.epd.unil.ch/pub/software/unix/madap. The MADAP web server can be accessed at http://www.isrec.isb-sib.ch/madap/.
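MADAP's own procedure is parameterised and considerably more elaborate; purely to illustrate the task described above (coordinates with counts in, peaks with volumes and centers out), here is a naive gap-based one-dimensional clusterer, not MADAP's algorithm:

```python
def cluster_1d(positions_counts, max_gap=50):
    """Group (coordinate, count) pairs into clusters whenever consecutive
    coordinates are separated by more than max_gap.
    Returns one (weighted_center, volume, start, end) tuple per cluster.
    Illustrative sketch only; assumes a non-empty input list."""
    data = sorted(positions_counts)
    clusters, current = [], [data[0]]
    for pos, cnt in data[1:]:
        if pos - current[-1][0] <= max_gap:
            current.append((pos, cnt))        # same peak: gap small enough
        else:
            clusters.append(current)          # close current peak, start new
            current = [(pos, cnt)]
    clusters.append(current)
    out = []
    for c in clusters:
        volume = sum(cnt for _, cnt in c)     # total signal in the peak
        center = sum(pos * cnt for pos, cnt in c) / volume
        out.append((center, volume, c[0][0], c[-1][0]))
    return out
```

A real tool would additionally smooth the profile and merge or split clusters according to user-set parameters, which is where MADAP's configurability comes in.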


The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and compared in an objective and automated way; the latter requirement stems from the large number of comparisons necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of these three stages. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science - Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited to different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, high-performance thin layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model. It is therefore possible to move away from the traditional subjective approach, which is based entirely on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains over the traditional subjective approach for the search of ink specimens in ink databases and the interpretation of their evidential value.
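The actual comparison metrics and probabilistic model are those of the cited Part II paper; as a generic illustration only (not the published algorithms), the sketch below computes a correlation score between two ink intensity profiles and converts a score into a toy likelihood ratio under an assumed Gaussian model of within-source and between-source scores:

```python
import math

def pearson(p, q):
    """Pearson correlation between two equal-length intensity profiles;
    one of many plausible comparison metrics, not the papers' algorithm."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    cov = sum((a - mp) * (b - mq) for a, b in zip(p, q))
    sp = math.sqrt(sum((a - mp) ** 2 for a in p))
    sq = math.sqrt(sum((b - mq) ** 2 for b in q))
    return cov / (sp * sq)

def likelihood_ratio(score, within_mean, within_sd, between_mean, between_sd):
    """Toy score-based LR: density of the observed score under 'same ink'
    divided by its density under 'different inks', both modelled as
    Gaussians -- an assumption for illustration, not the published model."""
    def norm_pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return norm_pdf(score, within_mean, within_sd) / norm_pdf(score, between_mean, between_sd)
```

An LR above 1 supports the "same ink" proposition; below 1, the alternative. The within/between score distributions would in practice be estimated from a reference database.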


Peripheral assessment of bone density using photon absorptiometry techniques has been available for over 40 yr. The initial use of radio-isotopes as the photon source has been replaced by the use of X-ray technology. A wide variety of models of single- or dual-energy X-ray measurement tools have been made available for purchase, although not all are still commercially available. The Official Positions of the International Society for Clinical Densitometry (ISCD) have been developed following a systematic review of the literature by an ISCD task force and a subsequent Position Development Conference. These cover the technological diversity among peripheral dual-energy X-ray absorptiometry (pDXA) devices; define whether pDXA can be used for fracture risk assessment and/or to diagnose osteoporosis; examine whether pDXA can be used to initiate treatment and/or monitor treatment; provide recommendations for pDXA reporting; and review quality assurance and quality control necessary for effective use of pDXA.


This paper deals with the recruitment strategies of employers in the low-skilled segment of the labour market. We focus on low-skilled workers because they are overrepresented among jobless people and constitute the bulk of the clientele included in various activation and labour market programmes. A better understanding of the constraints and opportunities of interventions in this labour market segment may help improve their quality and effectiveness. On the basis of qualitative interviews with 41 employers in six European countries, we find that the traditional signals known to be used as statistical discrimination devices (old age, immigrant status and unemployment) play a somewhat reduced role, since these profiles are overrepresented among applicants for low-skilled positions. However, we find that other signals, mostly considered indicators of motivation, have a greater impact in the selection process. These tend to concern the channel through which contact with a prospective candidate is made. Unsolicited applications and recommendations from already-employed workers send a positive signal, whereas referral by the public employment office is associated with a perceived likelihood of lower motivation.


Abstract: This work is concerned with the development and application of novel unsupervised learning methods, having in mind two target applications: the analysis of forensic case data and the classification of remote sensing images. First, a method based on a symbolic optimization of the inter-sample distance measure is proposed to improve the flexibility of spectral clustering algorithms, and applied to the problem of forensic case data. This distance is optimized using a loss function related to the preservation of neighborhood structure between the input space and the space of principal components, and solutions are found using genetic programming. Results are compared to a variety of state-of-the-art clustering algorithms. Subsequently, a new large-scale clustering method based on a joint optimization of feature extraction and classification is proposed and applied to various databases, including two hyperspectral remote sensing images. The algorithm makes use of a functional model (e.g., a neural network) for clustering which is trained by stochastic gradient descent. Results indicate that such a technique can easily scale to huge databases, can avoid the so-called out-of-sample problem, and can compete with or even outperform existing clustering algorithms on both artificial data and real remote sensing images. This is verified on small databases as well as very large problems.
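The thesis trains a functional model by stochastic gradient descent; as a generic stand-in (not the actual neural-network clusterer), online k-means below shows how SGD-style updates let a clustering scale to streams that never fit in memory, and how the out-of-sample problem disappears because any new point can simply be assigned to the nearest learned centroid:

```python
import random

def online_kmeans(stream, k, seed=0):
    """Online (stochastic-gradient) k-means: each incoming sample nudges its
    nearest centroid toward itself with a decaying per-centroid step size.
    Generic illustration only, not the thesis's model."""
    rng = random.Random(seed)
    centroids = None
    counts = [0] * k
    for x in stream:
        if centroids is None:
            # initialise all centroids near the first sample, slightly jittered
            centroids = [[x[j] + rng.gauss(0, 1e-3) for j in range(len(x))]
                         for _ in range(k)]
        c = min(range(k), key=lambda i: sum((x[j] - centroids[i][j]) ** 2
                                            for j in range(len(x))))
        counts[c] += 1
        step = 1.0 / counts[c]            # decaying learning rate per centroid
        for j in range(len(x)):
            centroids[c][j] += step * (x[j] - centroids[c][j])
    return centroids
```

With this step schedule each centroid converges to the running mean of the samples assigned to it, the stochastic analogue of the batch k-means update.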


The worldwide prevalence of smoking has been estimated at about 50% in men and 10% in women, with large variations among the populations studied. Smoking has been shown to affect many organ systems, resulting in severe morbidity and increased mortality. In addition, smoking has been identified as a predictor of ten-year fracture risk in men and women, largely independent of an individual's bone mineral density. This finding eventually led to the incorporation of this risk factor into FRAX®, an algorithm developed to calculate an individual's ten-year fracture risk. However, only limited or conflicting data are available on a possible association between smoking dose, duration, time since cessation, or type of tobacco and fracture risk, limiting this risk factor's applicability in the context of FRAX®.


The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
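The paper estimates a generalized profile with hidden Markov model training; the sketch below illustrates the simpler end of the same idea, a plain log-odds position weight matrix built from aligned selected sites and scored additively. The independent-columns assumption is exactly what the covariance analysis mentioned above shows to be violated for CTF/NFI, so this is a didactic simplification, not the paper's model:

```python
import math

def build_pwm(sites, background=0.25, pseudocount=1.0):
    """Log-odds position weight matrix from equal-length aligned binding
    sites (uppercase A/C/G/T), with a pseudocount and a uniform background."""
    bases = "ACGT"
    pwm = []
    for i in range(len(sites[0])):
        counts = {b: pseudocount for b in bases}
        for s in sites:
            counts[s[i]] += 1
        total = sum(counts.values())
        # log2 odds of observing each base here versus the background
        pwm.append({b: math.log2((counts[b] / total) / background) for b in bases})
    return pwm

def score(pwm, seq):
    """Additive log-odds score of seq (assumes len(seq) == len(pwm))."""
    return sum(col[b] for col, b in zip(pwm, seq))
```

Sliding such a scorer along genomic DNA and keeping high-scoring windows is the basic way a binding-site model is used to find candidate regulatory sequences.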