994 results for "solve"
Abstract:
In this paper we introduce several classes of dividend barriers into classical ruin theory. We study the influence of the barrier strategy on the ruin probability. A method based on renewal equations [Grandell (1991)], an alternative to the differential argument [Gerber (1975)], is used to obtain the partial differential equations satisfied by the survival probabilities. Finally, we compute and compare the survival probabilities under linear and parabolic dividend barriers with the help of simulation.
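The simulation approach mentioned above can be prototyped compactly. The sketch below is a minimal Monte Carlo estimate of the finite-horizon survival probability in the classical compound Poisson risk model with a linear dividend barrier; the parameter values, the exponential claim sizes, and the function name are illustrative assumptions, not the paper's specification.

```python
import random

def survival_prob_linear_barrier(u=10.0, c=1.5, lam=1.0, mean_claim=1.0,
                                 b0=15.0, slope=0.5, horizon=100.0,
                                 n_paths=10_000, seed=42):
    """Monte Carlo estimate of the finite-horizon survival probability in the
    compound Poisson risk model with a linear dividend barrier
    b(t) = b0 + slope*t: surplus above the barrier is paid out as dividends."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        ruined = False
        while t < horizon:
            wait = rng.expovariate(lam)               # time to next claim
            t += wait
            if t > horizon:
                break
            surplus += c * wait                       # premium income between claims
            # dividends skim any excess above the barrier; capping at claim
            # times is exact here because the premium rate c exceeds the slope
            surplus = min(surplus, b0 + slope * t)
            surplus -= rng.expovariate(1.0 / mean_claim)  # pay the claim
            if surplus < 0:
                ruined = True
                break
        survived += not ruined
    return survived / n_paths

print(survival_prob_linear_barrier())
```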
Abstract:
In a large number of economies, the evolution of industrial production is analysed using the information on Gross Industrial Product and/or Gross Value Added provided by the National Accounts. In Spain, the use of these data poses the problem that they are not available as promptly as would be desirable, so it is not possible to monitor industrial activity in the short term on their basis. To solve this problem, the Spanish National Statistics Institute (Instituto Nacional de Estadística) compiles a monthly Industrial Production Index from information gathered through a survey of a representative sample of Spanish firms. At the regional level, however, the difficulties in monitoring industrial activity are greater because of the scarcity of statistical information. In recent years, various public and private institutions have begun to compile activity indicators for some Spanish regions, although with non-homogeneous methodologies, so that these indexes are not directly comparable. To correct this situation, it has been proposed in various forums that the methodology used by the Institut d'Estadística de Catalunya (IEC) for Catalonia be adopted as an alternative for those Spanish regions that lack an indicator of industrial activity, given that it has proved to be an adequate methodology for Catalonia. This paper studies the suitability of extending this methodology to the rest of the Spanish regions. To this end, indicators are constructed according to the IEC methodology and compared with the regional indexes obtained by direct methods for three of the four regions for which such indexes exist: Andalusia, Asturias and the Basque Country.
Abstract:
One of the most important statistical tools for monitoring and analysing the short-term evolution of economic activity is the availability of estimates of the quarterly evolution of the components of GDP, on both the supply and the demand side. The need for this information with a short publication lag makes it essential to use temporal disaggregation methods that break annual information down into quarterly figures. The most widely applied of these, since it solves the problem very elegantly within a statistically optimal estimation framework, is the Chow-Lin method. This method does not, however, guarantee that the quarterly GDP estimates obtained from the supply side and from the demand side coincide, making it necessary to apply some reconciliation method afterwards. This paper develops a multivariate extension of the Chow-Lin method that solves the problem of estimating the quarterly values optimally, subject to a set of restrictions. One potential application of this method, which we have called the restricted Chow-Lin method, is precisely the joint estimation of quarterly values for each of the components of GDP on both the demand and the supply side, conditional on both quarterly GDP estimates being equal, thus avoiding the need to apply reconciliation methods afterwards.
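To make the baseline concrete, here is a minimal sketch of Chow-Lin-style temporal disaggregation in the simplified special case of white-noise quarterly residuals (the full method also estimates an AR(1) parameter for the residuals, and the restricted multivariate extension proposed in the paper goes further); the function name and toy data are assumptions for illustration.

```python
import numpy as np

def chow_lin_white_noise(y_annual, X_quarterly):
    """Temporal disaggregation a la Chow-Lin in the white-noise case (V = I).
    y_annual: (n,) annual totals; X_quarterly: (4n, k) quarterly indicators.
    Returns (4n,) estimates whose annual sums reproduce y_annual exactly."""
    n = len(y_annual)
    C = np.kron(np.eye(n), np.ones((1, 4)))   # aggregation: 4 quarters -> 1 year
    CX = C @ X_quarterly
    # with V = I, the annual-frequency GLS estimate of beta reduces to OLS
    beta = np.linalg.solve(CX.T @ CX, CX.T @ y_annual)
    resid_annual = y_annual - CX @ beta
    # distribute each annual residual equally over its four quarters,
    # since V C' (C V C')^{-1} = C'/4 when V = I
    return X_quarterly @ beta + C.T @ resid_annual / 4.0

# toy usage: a constant plus one quarterly indicator series
rng = np.random.default_rng(0)
n_years = 5
x = np.column_stack([np.ones(4 * n_years), rng.normal(10, 1, 4 * n_years)])
y = np.add.reduceat(x @ np.array([2.0, 3.0]), np.arange(0, 4 * n_years, 4))
print(chow_lin_white_noise(y, x))
```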
Abstract:
One of the main questions to solve when analysing geographically aggregated information is the design of territorial units suited to the objectives of the study. This is related to reducing the effects of the Modifiable Areal Unit Problem (MAUP). In this paper an optimisation model to solve regionalisation problems is proposed. This model seeks to overcome the disadvantages found in previous work on automated regionalisation tools.
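For readers unfamiliar with regionalisation, the toy sketch below shows the kind of contiguity-constrained clustering such models address: a greedy baseline that repeatedly merges adjacent regions with similar attribute values. It is not the optimisation model proposed in the paper, merely an illustration of the problem class; all names and data are invented.

```python
def greedy_regionalisation(values, adjacency, k):
    """Toy contiguity-constrained regionalisation: repeatedly merge the two
    adjacent regions with the closest mean attribute values until only k
    regions remain.  values: {area: float}; adjacency: {area: set of areas}."""
    members = {a: {a} for a in values}               # region -> its member areas
    neigh = {a: set(adjacency[a]) for a in values}   # region-level adjacency
    mean = dict(values)
    while len(members) > k:
        # pick the most homogeneous adjacent pair of regions
        r, s = min(((r, s) for r in members for s in neigh[r]),
                   key=lambda p: abs(mean[p[0]] - mean[p[1]]))
        members[r] |= members.pop(s)                 # contiguity-preserving merge
        mean[r] = sum(values[a] for a in members[r]) / len(members[r])
        neigh[r] = (neigh[r] | neigh.pop(s)) - {r, s}
        for t in neigh:                              # redirect s's neighbours to r
            if s in neigh[t]:
                neigh[t].remove(s)
                if t != r:
                    neigh[t].add(r)
    return list(members.values())

# toy usage: six areas on a line, grouped into three regions
vals = {"a": 1.0, "b": 1.1, "c": 5.0, "d": 5.2, "e": 9.0, "f": 9.1}
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"},
       "d": {"c", "e"}, "e": {"d", "f"}, "f": {"e"}}
print(greedy_regionalisation(vals, adj, 3))
```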
Abstract:
Introduction: Pediatric intensive care patients represent a population at high risk for drug-related problems. Our objective is to describe the drug-related problems identified, and the interventions made, by four decentralized pharmacists in pediatric and cardiac intensive care units.
Materials & Methods: Multicentric, descriptive and prospective study over a six-month period (August 1st 2009 - January 31st 2010). Drug-related problems and clinical interventions were compiled in four pediatric centers using a tool developed by the Société Française de Pharmacie Clinique. Data concerning patients, drugs, intervention, documentation, approval (if needed), and estimated impact were compiled. The four participating pharmacists were from Belgium (B), France (F), Quebec (Q) and Switzerland (S).
Results: A total of 996 interventions were collected: 129 (13%) in B, 238 (24%) in F, 278 (28%) in Q and 351 (35%) in S. These interventions targeted 269 patients (median age 22 months, 52% male): 69 (26%) in B, 88 (33%) in F, and 56 (21%) each in Q and S. These data were collected during 28 non-consecutive days in the clinical unit in B, 59 days in F, 42 days in Q and 63 days in S. The main drug-related problems were inappropriate administration technique (293, 29%), untreated indication (254, 25%) and supratherapeutic dosage (106, 11%). The pharmacists' interventions mainly concerned administration mode optimization (223, 22%), dose adjustment (200, 20%) and therapeutic monitoring (164, 16%). The drug classes most often leading to interventions were anti-infectives for systemic use (233, 23%) and alimentary tract and metabolism drugs (218, 22%). Interventions mainly concerned residents and all clinical staff (209, 21%). Among the 879 (88%) interventions requiring a physician's approval, 731 (83%) were accepted. Interventions were considered as having a moderate (51%) or major (17%) clinical impact. Among the interventions provided, 10% were considered to have a positive economic impact. Differences and similarities between countries will be presented at the poster session.
Discussion & Conclusion: A decentralized pharmacist at the patient's bedside is a prerequisite for pharmaceutical care. There are few studies comparing the activity of clinical pharmacists between countries. This descriptive study illustrates the ability of clinical pharmacists to identify and solve drug-related problems in pediatric intensive care units in four different francophone countries.
Abstract:
This paper reports on the purpose, design, methodology and target audience of E-learning courses in forensic interpretation offered by the authors since 2010, including practical experience gained throughout the implementation period of this project. This initiative was motivated by the fact that reporting the results of forensic examinations in a logically correct and scientifically rigorous way is a daily challenge for any forensic practitioner. Indeed, the interpretation of raw data and the communication of findings in both written and oral statements are topics where knowledge and applied skills are needed. Although most forensic scientists hold educational records in traditional sciences, only a few have actually followed full courses that focus on interpretation issues. Such courses should include foundational principles and methodology - including elements of forensic statistics - for the evaluation of forensic data in a way that is tailored to meet the needs of the criminal justice system. In order to help bridge this gap, the authors' initiative seeks to offer educational opportunities that allow practitioners to acquire knowledge and competence in the current approaches to the evaluation and interpretation of forensic findings. These cover, among other aspects, probabilistic reasoning (including Bayesian networks and other methods of forensic statistics, tools and software), case pre-assessment, skills in the oral and written communication of uncertainty, and the development of the independence and self-confidence needed to solve practical inference problems. E-learning was chosen as the general format because it helps to form a trans-institutional online community of practitioners from varying forensic disciplines and fields of experience, such as reporting officers, (chief) scientists and forensic coordinators, but also lawyers, who can all interact directly from their personal workplaces without consideration of distances, travel expenses or time schedules. In the authors' experience, the proposed learning initiative supports participants in developing their expertise and skills in forensic interpretation, and also offers an opportunity for the associated institutions and the forensic community to reinforce the development of a harmonized view of interpretation across forensic disciplines, laboratories and judicial systems.
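As a small illustration of the probabilistic reasoning taught in such courses, Bayes' theorem in odds form updates prior odds by the likelihood ratio (LR) of the evidence; the figures below are invented for illustration only.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = LR x prior odds."""
    return likelihood_ratio * prior_odds

def odds_to_prob(odds):
    """Convert odds o to a probability o / (1 + o)."""
    return odds / (1.0 + odds)

# toy figures: prior odds of 1:1000 for the prosecution proposition,
# and evidence with LR = 10_000 in its favour
post = posterior_odds(1 / 1000, 10_000)
print(post, odds_to_prob(post))   # odds 10:1  ->  probability ~0.909
```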
Abstract:
In this thesis, we study the use of prediction markets for technology assessment. We particularly focus on their ability to assess complex issues, the design constraints required for such applications, and their efficacy compared to traditional techniques. To achieve this, we followed a design science research paradigm, iteratively developing, instantiating, evaluating and refining the design of our artifacts. This allowed us to make multiple contributions, both practical and theoretical. We first showed that prediction markets are adequate for properly assessing complex issues. We also developed a typology of design factors and design propositions for using these markets in a technology assessment context. Then, we showed that they are able to solve some issues related to the R&D portfolio management process, and we proposed a roadmap for their implementation. Finally, by comparing the instantiation and the results of a multi-criteria decision method and of a prediction market, we showed that the latter is more efficient while offering similar results. We also proposed a framework for comparing forecasting methods, in order to identify the applicable constraints based on contingency factors. In conclusion, our research opens a new field of application of prediction markets and should help hasten their adoption by enterprises.
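For readers unfamiliar with how such markets operate, the sketch below implements one common automated market-maker, Hanson's logarithmic market scoring rule (LMSR); the thesis does not necessarily rely on this particular mechanism, and the liquidity parameter and trade sizes are illustrative.

```python
import math

def lmsr_cost(q, b=100.0):
    """Hanson's logarithmic market scoring rule: cost function of the
    vector q of outstanding shares per outcome, with liquidity parameter b."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices; they sum to 1 and read as event probabilities."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

# a trader buys 20 shares of outcome 0 in a fresh two-outcome market;
# the fee is the change in the cost function
q = [0.0, 0.0]
fee = lmsr_cost([q[0] + 20, q[1]]) - lmsr_cost(q)
print(round(fee, 2), [round(p, 3) for p in lmsr_prices([20.0, 0.0])])
```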
Abstract:
Measuring school efficiency is a challenging task. First, a performance measurement technique has to be selected. Within Data Envelopment Analysis (DEA), one such technique, alternative models have been developed in order to deal with environmental variables; the majority of these models lead to diverging results. Second, the choice of input and output variables to be included in the efficiency analysis is often dictated by data availability, and remains an issue even when data are available. As a result, the choice of technique, model and variables is ultimately a matter of judgement, even a political one. Multi-criteria decision analysis methods can help decision makers select the most suitable model. The number of selection criteria should remain parsimonious and should not be oriented towards the results of the models, in order to avoid opportunistic behaviour; the criteria should also be backed by the literature or by an expert group. Once the most suitable model is identified, the principle of permanence of methods should be applied in order to avoid a change of practices over time. Within DEA, the two-stage model developed by Ray (1991) is the most convincing model allowing for an environmental adjustment: an efficiency analysis is first conducted with DEA, followed by an econometric analysis to explain the efficiency scores. An environmental variable of particular interest, tested in this thesis, is whether a school operates on multiple sites. Results show that being located on more than one site has a negative influence on efficiency. A likely way to mitigate this negative influence would be to improve the use of ICT in school management and teaching. The planning of new schools should also consider the advantages of a single site, which allows a critical size in terms of pupils and teachers to be reached. The fact that underprivileged pupils perform worse than privileged pupils has been public knowledge since Coleman et al. (1966); as a result, underprivileged pupils have a negative influence on school efficiency. This is confirmed by this thesis for the first time in Switzerland. Several countries have developed priority education policies in order to compensate for the negative impact of disadvantaged socioeconomic status on school performance. These policies have failed, so other actions need to be taken. To define these actions, one has to identify the social-class differences that explain why disadvantaged children underperform. Childrearing and literacy practices, health characteristics, housing stability and economic security all influence pupil achievement. Rather than allocating more resources to schools, policymakers should therefore focus on the related social policies: for instance, pre-school, family, health, housing and benefits policies designed to improve the conditions of disadvantaged children.
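For concreteness, the first stage of such an analysis can be written as a small linear program per school. The sketch below implements the basic input-oriented CCR DEA model, not Ray's (1991) two-stage specification used in the thesis (which adds an econometric second stage); the toy data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores.  X: (n, m) inputs,
    Y: (n, s) outputs, one row per school (decision-making unit)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables: theta, lambda_1 .. lambda_n
        c = np.r_[1.0, np.zeros(n)]
        A_ub = np.vstack([
            np.c_[-X[o], X.T],         # sum_j lam_j x_ij <= theta * x_io
            np.c_[np.zeros(s), -Y.T],  # sum_j lam_j y_rj >= y_ro
        ])
        b_ub = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return np.array(scores)

# toy data: 4 schools, 1 input (teaching hours), 1 output (test score)
X = np.array([[20.0], [40.0], [30.0], [50.0]])
Y = np.array([[100.0], [150.0], [120.0], [140.0]])
print(dea_ccr_input(X, Y).round(3))   # school 0 is efficient (score 1.0)
```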
Abstract:
Despite clear evidence of correlations between financial and medical statuses and decisions, most models treat financial and health-related choices separately. This article bridges this gap by proposing a tractable dynamic framework for the joint determination of optimal consumption, portfolio holdings, health investment, and health insurance. We solve for the optimal rules in closed form and capitalize on this tractability to gain a better understanding of the conditions under which separation between financial and health-related decisions is sensible, and of the pathways through which wealth and health determine allocations, welfare, and other variables of interest such as expected longevity or the value of health. Furthermore, we show that the model is consistent with the observed patterns of individual allocations and provide realistic estimates of the parameters that confirm the relevance of all the main characteristics of the model.
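As a point of reference for what a closed-form portfolio rule can look like, the classical Merton benchmark (a much simpler model than the one studied here, with no health investment or insurance) gives the optimal risky-asset share under CRRA utility as

\[
\pi^{*} \;=\; \frac{\mu - r}{\gamma\,\sigma^{2}},
\]

where \(\mu - r\) is the expected excess return of the risky asset, \(\sigma\) its volatility, and \(\gamma\) the agent's relative risk aversion.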
Abstract:
The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources. The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. 
To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
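As an illustration of the matching step described above, here is a minimal sketch assuming a logistic-regression propensity model and 1:1 nearest-neighbour matching with replacement within a caliper; the study's exact matching specification may differ, and the data generated below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def psm_matched_difference(X, treated, y, caliper=0.05):
    """1:1 nearest-neighbour propensity score matching with replacement.
    X: covariates (e.g. age, gender, education); treated: 0/1 sample
    indicator (e.g. pretest vs posttest); y: outcome (e.g. a fear-of-crime
    score).  Returns the matched mean difference and the number of matches."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.flatnonzero(treated == 1)
    c_idx = np.flatnonzero(treated == 0)
    diffs = []
    for i in t_idx:
        j = c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]   # closest control
        if abs(ps[j] - ps[i]) <= caliper:                  # enforce the caliper
            diffs.append(y[i] - y[j])
    return np.mean(diffs), len(diffs)

# toy usage with synthetic data and a known true effect of -0.3
rng = np.random.default_rng(1)
n = 400
X = np.c_[rng.normal(40, 12, n), rng.integers(0, 2, n), rng.integers(1, 4, n)]
treated = rng.integers(0, 2, n)
y = 5 - 0.3 * treated + rng.normal(0, 1, n)
diff, n_matched = psm_matched_difference(X, treated, y)
print(round(diff, 2), n_matched)
```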
Abstract:
In an attempt to solve the bridge problem faced by many county engineers, this investigation focused on a low cost bridge alternative that consists of using railroad flatcars (RRFC) as the bridge superstructure. The intent of this study was to determine whether these types of bridges are structurally adequate and potentially feasible for use on low volume roads. A questionnaire was sent to the Bridge Committee members of the American Association of State Highway and Transportation Officials (AASHTO) to determine their use of RRFC bridges and to assess the pros and cons of these bridges based on others’ experiences. It was found that these types of bridges are widely used in many states with large rural populations and they are reported to be a viable bridge alternative due to their low cost, quick and easy installation, and low maintenance. A main focus of this investigation was to study an existing RRFC bridge that is located in Tama County, IA. This bridge was analyzed using computer modeling and field load testing. The dimensions of the major structural members of the flatcars in this bridge were measured and their properties calculated and used in an analytical grillage model. The analytical results were compared with those obtained in the field tests, which involved instrumenting the bridge and loading it with a fully loaded rear tandem-axle truck. Both sets of data (experimental and theoretical) show that the Tama County Bridge (TCB) experienced very low strains and deflections when loaded and the RRFCs appeared to be structurally adequate to serve as a bridge superstructure. A calculated load rating of the TCB agrees with this conclusion. Because many different types of flatcars exist, other flatcars were modeled and analyzed. It was very difficult to obtain the structural plans of RRFCs; thus, only two additional flatcars were analyzed. The results of these analyses also yielded very low strains and displacements. Taking into account the experiences of other states, the inspection of several RRFC bridges in Oklahoma, the field test and computer analysis of the TCB, and the computer analysis of two additional flatcars, RRFC bridges appear to provide a safe and feasible bridge alternative for low volume roads.
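As a back-of-the-envelope companion to the grillage analysis, the classic simply-supported beam formula gives an order-of-magnitude deflection check; the numbers below are purely illustrative and are not the Tama County Bridge properties.

```python
def midspan_deflection(P, L, E, I):
    """Midspan deflection of a simply supported beam under a central point
    load: delta = P * L**3 / (48 * E * I).  A first-order sanity check only,
    not the grillage model used in the study."""
    return P * L**3 / (48 * E * I)

# illustrative numbers only
P = 100e3    # N, one axle group
L = 17.0     # m, span
E = 200e9    # Pa, steel
I = 1.0e-2   # m^4, combined girder section
print(midspan_deflection(P, L, E, I) * 1000, "mm")   # ~5 mm
```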
Abstract:
A new method to solve the Lorentz-Dirac equation in the presence of an external electromagnetic field is presented. The validity of the approximation is discussed, and the method is applied to a particle in the presence of a constant magnetic field.
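For reference, the Lorentz-Dirac equation in one standard covariant form (Gaussian units, metric signature \((+,-,-,-)\); sign conventions vary across references) reads

\[
m\,\frac{du^{\mu}}{d\tau}
\;=\; \frac{e}{c}\,F^{\mu\nu}_{\text{ext}}\,u_{\nu}
\;+\; \frac{2e^{2}}{3c^{3}}\left(\frac{d^{2}u^{\mu}}{d\tau^{2}}
\;+\; \frac{1}{c^{2}}\,\frac{du^{\nu}}{d\tau}\frac{du_{\nu}}{d\tau}\,u^{\mu}\right),
\]

where the last term is the radiation-reaction (Abraham) four-force, constructed to remain orthogonal to the four-velocity \(u^{\mu}\) (with \(u^{\mu}u_{\mu}=c^{2}\)).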
Abstract:
We discuss reality conditions and the relation between spacetime diffeomorphisms and gauge transformations in Ashtekar's complex formulation of general relativity. We produce a general theoretical framework for the stabilization algorithm for the reality conditions, which is different from Dirac's method of stabilization of constraints. We solve the problem of the projectability of the diffeomorphism transformations from configuration-velocity space to phase space, linking them to the reality conditions. We construct the complete set of canonical generators of the gauge group in the phase space which includes all the gauge variables. This result proves that the canonical formalism has all the gauge structure of the Lagrangian theory, including the time diffeomorphisms.
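For orientation (conventions and numerical factors vary across the literature), the complex Ashtekar connection combines the spin connection \(\Gamma^{i}_{a}\) with the extrinsic curvature \(K^{i}_{a}\),

\[
A^{i}_{a} \;=\; \Gamma^{i}_{a} \;-\; i\,K^{i}_{a},
\]

canonically conjugate to the densitized triad \(\tilde{E}^{a}_{i}\); the reality conditions then require the densitized spatial metric \(\tilde{E}^{a}_{i}\tilde{E}^{b\,i}\), and its evolution under the constraints, to be real.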
Abstract:
We consider the distribution of cross sections of clusters and the density-density correlation functions for the A+B→0 reaction. We solve the reaction-diffusion equations numerically for random initial distributions of reactants. When both reactant species have the same diffusion coefficients, the distribution of cross sections and the correlation functions scale with the diffusion length and obey superuniversal laws (independent of dimension). For different diffusion coefficients the correlation functions still scale, but the scaling functions depend on the dimension and on the diffusion coefficients. Furthermore, we display explicitly the peculiarities of the cluster-size distribution in one dimension.
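The system being integrated is the pair of coupled equations ∂ρ_A/∂t = D_A ∇²ρ_A − k ρ_A ρ_B and ∂ρ_B/∂t = D_B ∇²ρ_B − k ρ_A ρ_B. The sketch below is a minimal 1-D explicit finite-difference integration with random initial densities; the paper's actual grids, dimensions and rate constants may differ, and all values here are illustrative.

```python
import numpy as np

def simulate_ab_annihilation(n=256, steps=2000, Da=1.0, Db=1.0, k=1.0,
                             dx=1.0, dt=0.1, seed=0):
    """Explicit finite-difference integration of the A+B->0 reaction-diffusion
    system in 1-D with periodic boundaries and random initial densities.
    For stability, dt must satisfy dt <= dx**2 / (2 * max(Da, Db))."""
    rng = np.random.default_rng(seed)
    a = rng.random(n)
    b = rng.random(n)
    for _ in range(steps):
        lap_a = (np.roll(a, 1) - 2 * a + np.roll(a, -1)) / dx**2
        lap_b = (np.roll(b, 1) - 2 * b + np.roll(b, -1)) / dx**2
        r = k * a * b                     # local annihilation rate
        a = a + dt * (Da * lap_a - r)
        b = b + dt * (Db * lap_b - r)
    return a, b

a, b = simulate_ab_annihilation()
print(a.mean(), b.mean())   # both decay; the mean difference is conserved
```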
Abstract:
The paper deals with the development and application of a generic methodology for the automatic processing (mapping and classification) of environmental data. The General Regression Neural Network (GRNN) is considered in detail and is proposed as an efficient tool to solve the problem of spatial data mapping (regression). The Probabilistic Neural Network (PNN) is considered as an automatic tool for spatial classification. The automatic tuning of isotropic and anisotropic GRNN/PNN models using a cross-validation procedure is presented. Results are compared with the k-Nearest-Neighbours (k-NN) interpolation algorithm using an independent validation data set. Real case studies are based on decision-oriented mapping and classification of radioactively contaminated territories.
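A GRNN is essentially Nadaraya-Watson kernel regression: the prediction at a query point is a kernel-weighted average of the training targets. A minimal isotropic sketch follows; the paper tunes the bandwidth by cross-validation and also considers anisotropic kernels, and the toy data below are synthetic.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """General Regression Neural Network prediction: a Gaussian-kernel
    weighted average of training targets, with isotropic bandwidth sigma
    (in practice chosen by cross-validation)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# toy spatial example: noisy samples of a smooth surface on a 10x10 domain
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 2))                   # sample coordinates
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # measured values
grid = np.array([[2.0, 5.0], [7.5, 3.0]])          # query locations
print(grnn_predict(X, y, grid, sigma=0.8))
```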