963 results for Statistics and Probability


Relevance: 90.00%

Abstract:

Transportation plays a major role in the gross domestic product of many nations. There are, however, many obstacles hindering the transportation sector: achieving cost-efficiency together with proper delivery times, high frequency and reliability is not a straightforward task. Furthermore, environmental friendliness has grown in importance across the whole transportation sector, a development that will change roles within it. Even now, but especially in the future, decisions regarding the transportation sector will be based partly on emission levels and other externalities originating from transportation, in addition to pure transportation costs. Several factors could have an impact on the sector. The IMO's sulphur regulation is estimated to increase the costs of short sea shipping in the Baltic Sea. The price development of energy could change the roles of different transport modes. Higher awareness of the environmental impacts of transportation could also affect the price level of more polluting transport modes. According to earlier research, increased inland transportation, modal shift and slow steaming are possible results of these changes.

This dissertation studies possible changes in the transportation sector and ways to overcome potential obstacles, as well as means to improve cost-efficiency and to decrease the environmental impacts of transportation. A hypothetical Finnish dry port network and the Rail Baltica transport corridor are examined, and their benefits and disadvantages are studied with several methodologies: gravitational models optimized with linear integer programming, discrete-event and system dynamics simulation, an interview study and a case study. The geographical focus is on the Baltic Sea Region, but the results can be adapted to other geographical locations with discretion.

The results indicate that the dry port concept has benefits, but optimization of the location and number of dry ports plays an important role. In addition, the routing of freight through dry ports should be managed carefully, since only a certain share of total freight volume can be transported cost-efficiently through them. If dry ports are created and located without proper planning, they could actually increase the transportation costs and delivery times of the whole transportation system. With an optimized network, transportation costs in Finland can be lowered with three to five dry ports, and environmental impacts can be lowered with up to nine. Beyond that, the benefits become very minor, i.e. the payback time of the investments becomes extremely long. Furthermore, a dry port network could support major transport corridors such as Rail Baltica. Based on an analysis of statistics and an interview study, there could be enough freight volume available for Rail Baltica, especially if North-West Russia is part of the northern end of the corridor. Transit traffic to and from Russia (especially through the Baltic States) plays a large role, and it could be possible to increase transit traffic through Finland by connecting the potential Finnish dry port network to the studied corridor. Additionally, the sulphur emission regulation is assumed to increase the attractiveness of Rail Baltica from the year 2015 onward: part of the transit traffic could be rerouted along Rail Baltica instead of the Baltic Sea, since the price level of sea transport could rise due to the regulation. The hypothetical Finnish dry port network and the Rail Baltica transport corridor could thus benefit each other. The dry port network could gain more market share from Russia, but also from Central Europe, the other end of Rail Baltica. In addition, further Eastern countries could be connected to achieve higher potential freight volume by rail.
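
The abstract names gravitational models optimized with linear integer programming among its methods. The dissertation's actual formulation and data are not given here, so the following Python sketch only illustrates the general shape of such a location-optimization problem as a p-median-style integer program in PuLP; all regions, candidate sites, volumes, costs and the cap on the number of dry ports are hypothetical placeholders.

```python
# Minimal p-median-style sketch of dry port location optimization.
# All data below are hypothetical placeholders, not the dissertation's.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

regions = ["R1", "R2", "R3", "R4"]                    # freight origin regions
sites = ["S1", "S2", "S3"]                            # candidate dry port sites
volume = {"R1": 120, "R2": 80, "R3": 60, "R4": 40}    # freight units per region
# unit cost of routing region r's freight via dry port s (rail + road legs)
cost = {(r, s): 1.0 + 0.1 * (i + j)
        for i, r in enumerate(regions) for j, s in enumerate(sites)}
max_ports = 2                                          # cap on opened dry ports

prob = LpProblem("dry_port_network", LpMinimize)
open_site = LpVariable.dicts("open", sites, cat=LpBinary)
assign = LpVariable.dicts("assign", [(r, s) for r in regions for s in sites],
                          cat=LpBinary)

# objective: total transport cost over all region-to-port assignments
prob += lpSum(volume[r] * cost[(r, s)] * assign[(r, s)]
              for r in regions for s in sites)
# each region's freight is routed through exactly one dry port
for r in regions:
    prob += lpSum(assign[(r, s)] for s in sites) == 1
# freight may only use opened dry ports
for r in regions:
    for s in sites:
        prob += assign[(r, s)] <= open_site[s]
# limit the number of dry ports in the network
prob += lpSum(open_site[s] for s in sites) <= max_ports

prob.solve()
print([s for s in sites if open_site[s].value() == 1])
```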

Relevance: 90.00%

Abstract:

The aim of this Master's thesis is to find a method for classifying spare part criticality in the case company. Several approaches to criticality classification of spare parts exist. The practical problem addressed in this thesis is the lack of a generic analysis method for classifying spare parts of the case company's proprietary equipment. Finding a classification method requires a literature review of the various analysis methods; the requirements of the case company also have to be recognized, which is achieved by consulting professionals in the company. The literature review shows that the analytic hierarchy process (AHP) combined with decision tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare part criticality from a stock-holding perspective, which is also a relevant perspective for a customer-oriented original equipment manufacturer (OEM) such as the case company. A decision tree model is developed that classifies spare parts into five criticality classes according to five criteria: safety risk, availability risk, functional criticality, predictability of failure and probability of failure. The criticality classes describe the level of criticality from non-critical to highly critical. The method is verified by classifying the spare parts of a full deposit stripping machine. The classification can be utilized as a generic model for recognizing critical spare parts of other similar equipment, from which spare part recommendations can be created. Purchase price of an item and equipment criticality were found to have no effect on spare part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare part criticality in the company.
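
The abstract does not reproduce the thesis's actual branching order or thresholds, so the Python sketch below is purely illustrative of how a decision tree over the five named criteria could map a spare part to one of five criticality classes; the rules and the example part are hypothetical.

```python
# Hypothetical decision tree over the five criteria named in the abstract;
# the thesis's actual branching order and thresholds are not given there.
from dataclasses import dataclass

@dataclass
class SparePart:
    name: str
    safety_risk: bool            # failure endangers people
    availability_risk: bool      # long or uncertain replenishment lead time
    functionally_critical: bool  # failure stops the equipment
    failure_predictable: bool    # wear-out can be monitored or anticipated
    failure_probable: bool       # failure is likely within the planning horizon

def criticality_class(p: SparePart) -> int:
    """Return a criticality class from 1 (non-critical) to 5 (highly critical)."""
    if p.safety_risk:
        return 5
    if p.functionally_critical and p.availability_risk:
        return 4
    if p.functionally_critical:
        return 3 if p.failure_probable and not p.failure_predictable else 2
    return 2 if p.availability_risk else 1

# example: a functionally critical part with supply risk -> class 4
print(criticality_class(SparePart("anode seal", False, True, True, False, True)))
```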

Relevance: 90.00%

Abstract:

This experimental study examined the effects of cooperative learning and a question-answering strategy called elaborative interrogation ("Why is this fact true?") on the learning of factual information about familiar animals. Retention gains were compared across four study conditions: elaborative-interrogation-plus-cooperative-learning, cooperative-learning, elaborative-interrogation, and reading-control. Sixth-grade students (n=68) were randomly assigned to the four conditions. All participants were given initial training and practice in cooperative learning procedures via three 45-minute sessions. After studying 36 facts about six animals, students' retention gains were measured via immediate free recall, immediate matched association, and 30-day and 60-day matched association tests. A priori comparisons were made to analyze the data. For immediate free recall and immediate matched association, significant differences were found between students in the three experimental conditions and those in the control condition. Elaborative-interrogation and elaborative-interrogation-plus-cooperative-learning also promoted long-term retention (measured via 30-day matched association) of the material relative to repetitive reading, with elaborative-interrogation promoting the most durable gains (measured via 60-day matched association). The relationship between the types of elaborative responses and the probability of subsequent retention was also examined. Even when students were unable to provide adequate answers to the why questions, learning was facilitated more than by repetitive reading. In general, generation of adequate elaborations was associated with a greater probability of recall than provision of inadequate answers. The findings of the study demonstrate that cooperative learning and elaborative interrogation, both individually and in combination, are effective classroom procedures for facilitating children's learning of new information.

Relevance: 90.00%

Abstract:

The purpose of this study was to investigate the learning preferences and post-secondary educational experiences of a group of Net-Gen adult learners, aged 18 to 35 and currently working in the knowledge economy workplace, and their assessment of how adequately they were prepared to meet the requirements of that workplace. The study utilized an explanatory mixed-method research design. Participants completed a questionnaire providing information on their self-reported learning style preferences, their use of digital tools for formal and informal learning, their use of digital technologies in post-secondary educational experiences, and their use of digital technologies in the workplace. Four volunteers from among the questionnaire respondents were selected for interviews based on the diversity of their experiences in higher education, including digital environments, and the diversity of their knowledge economy workplaces. Data collected from the questionnaire were analyzed for descriptive and demographic statistics and categorized so that common patterns could be identified across the online questionnaire and the interviews. The findings indicated that these Net-Gen adult learners were fluent with all types of digital technologies in collaborative environments and expected their educational experiences to be similarly interactive. Participants clearly expressed an understanding that digital and collaborative aptitudes are essential to successful employment in the knowledge economy workplace. The findings also indicated that the majority of participants felt their post-secondary educational experiences did not adequately prepare them to meet the expectations of this type of working environment.

Relevance: 90.00%

Abstract:

The feedback-related negativity (FRN) is thought to reflect a dopaminergic prediction error signal sent from subcortical areas to the ACC (i.e., a bottom-up signal). Two studies were conducted to test a new model of FRN generation which includes direct modulating influences of the medial PFC (i.e., top-down signals) on the ACC at the time of the FRN. Study 1 examined the effects of one's sense of control (top-down) and of informative cues (bottom-up) on FRN measures. In Study 2, sense of control, instruction-based expectations (top-down) and probability-based expectations (bottom-up) were manipulated to test the proposed model. The results suggest that any influences of the medial PFC on the activity of the ACC occurring in the context of incentive tasks are not direct. The FRN was shown to be sensitive to salient stimulus characteristics. The results of this dissertation partially support reinforcement learning theory, in that the FRN is a marker of a prediction error signal from subcortical areas; however, the pattern of results suggests that prediction errors are based on salient stimulus characteristics and are not reward specific. A second goal of this dissertation was to examine whether ACC activity, measured through the FRN, is altered in individuals at risk for problem-gambling behaviour (PG). Individuals in this group were more sensitive to the valence of the outcome in a gambling task than not-at-risk individuals, suggesting that gambling contexts increase the reward system's sensitivity to outcome valence in individuals at risk for PG. Furthermore, at-risk participants showed an increased sensitivity to reward characteristics and a decreased response to loss outcomes, in contrast to those not at risk, whose FRNs were sensitive to losses. As the results did not replicate previous research showing attenuated FRNs in pathological gamblers, it is likely that the size and timing of the FRN do not change gradually with increasing risk of maladaptive behaviour. Instead, changes in ACC activity reflected by the FRN can in general be observed only after behaviour becomes clinically maladaptive, or through comparison between different types of gain/loss outcomes.

Relevance: 90.00%

Abstract:

Soil-transmitted helminth (STH) infections are endemic in Honduras, but their prevalence according to the levels of poverty in the population has not been examined. The present cross-sectional study aimed to determine the role of different levels of poverty in STH prevalence and infection intensity, as well as the potential associations of STH infections with malnutrition and anemia. Research participants were children attending a medical brigade serving remote communities in Northern Honduras in June 2014. Demographic data were obtained, and poverty levels were determined using the unsatisfied basic needs method. STH infections were investigated by the Kato-Katz method; hemoglobin concentrations were determined with the HemoCue system; and stunting, thinness, and underweight were determined by anthropometry. Data were analyzed using descriptive statistics and univariate and multivariable logistic regression models. Among the 130 children who participated in this study, a high prevalence of parasitism (69.2%) was found, and the poorest children were significantly more infected than those living in less poor communities (79.6% vs. 61.8%; P = 0.030). Prevalence rates of Trichuris trichiura, Ascaris lumbricoides, and hookworms were 69.2%, 12.3%, and 3.85%, respectively. In total, 69% of children had anemia and 30% were stunted. Earthen household floors and lack of latrines were associated with infection. Greater efforts should be made to reduce STH prevalence and improve overall childhood health, in particular among the poorest children lacking the basic necessities of life.
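
As a hedged illustration of the regression analysis described above (the study's actual variables and data are not given in the abstract), the following Python sketch fits univariable and multivariable logistic regressions with statsmodels on simulated placeholder data; all variable names and coefficients are hypothetical.

```python
# Minimal sketch of univariable/multivariable logistic regression of the
# kind the abstract describes; data and variable names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "earthen_floor": rng.integers(0, 2, n),
    "no_latrine": rng.integers(0, 2, n),
    "poorest": rng.integers(0, 2, n),   # unsatisfied basic needs indicator
})
# simulate an infection outcome driven by the household-level exposures
logit_p = -0.5 + 0.9 * df.earthen_floor + 0.7 * df.no_latrine + 0.5 * df.poorest
df["sth_infected"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# univariable model: one exposure at a time
uni = smf.logit("sth_infected ~ earthen_floor", data=df).fit(disp=0)
# multivariable model: adjust for the other factors
multi = smf.logit("sth_infected ~ earthen_floor + no_latrine + poorest",
                  data=df).fit(disp=0)
print(np.exp(multi.params))  # exponentiated coefficients = odds ratios
```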

Relevance: 90.00%

Abstract:

One of the unsupervised learning models generating the most active research is the Boltzmann machine, in particular the restricted Boltzmann machine, or RBM. An important aspect of both training and exploiting such a model is the drawing of samples. Two recent developments, fast persistent contrastive divergence (FPCD) and herding, aim to improve this aspect, concentrating mainly on the learning process itself. Notably, herding forgoes obtaining a precise estimate of the RBM's parameters, instead defining a distribution through a dynamical system guided by the training examples. We generalize these ideas to obtain algorithms for exploiting the probability distribution defined by a pre-trained RBM by drawing representative samples from it, without the training set being necessary. We present three methods: sample penalization (based on a theoretical intuition), as well as FPCD and herding using constant statistics for the positive phase. These methods define dynamical systems that produce samples with the desired statistics, and we evaluate them using a non-parametric density estimation method. We show that these methods mix substantially better than the conventional method, Gibbs sampling.
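
The abstract names Gibbs sampling as the conventional baseline. The Python sketch below shows plain block Gibbs sampling from a binary RBM; the weights are random stand-ins for a pre-trained model, and the paper's three improved samplers are not reproduced.

```python
# Block Gibbs sampling for a binary RBM -- the conventional baseline the
# abstract compares against. Weights are random stand-ins for trained ones.
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 6, 4
W = rng.normal(0, 0.1, (n_vis, n_hid))    # stand-in for trained weights
b, c = np.zeros(n_vis), np.zeros(n_hid)   # visible / hidden biases

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def gibbs_chain(v, n_steps=1000):
    """Alternate h ~ P(h|v) and v ~ P(v|h); return the visited visible states."""
    samples = []
    for _ in range(n_steps):
        h = (rng.random(n_hid) < sigmoid(v @ W + c)).astype(float)
        v = (rng.random(n_vis) < sigmoid(W @ h + b)).astype(float)
        samples.append(v)
    return np.array(samples)

v0 = rng.integers(0, 2, n_vis).astype(float)
print(gibbs_chain(v0).mean(axis=0))  # empirical marginals of the visible units
```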

Relevance: 90.00%

Abstract:

The recent explosion in the number of centenarians in low-mortality countries is closely tied to the multiplication of studies on longevity, and more specifically on its determinants and repercussions. While some researchers try to discover the genes that may be responsible for extreme longevity, others examine the social, economic and political impact of population ageing and increasing life expectancy, or the existence of a biological limit to human life. In this thesis, we analyse the demographic situation of Quebec centenarians since the beginning of the 20th century using aggregate data (census data, vital statistics, population estimates). Second, we assess the quality of Quebec data at the oldest ages using a nominative list of deaths of centenarians from the 1870-1894 birth cohorts, examining, among other things, mortality trajectories beyond age 100. Finally, we analyse the survival of the siblings and parents of a sample of semi-supercentenarians (aged 105 and over) born between 1890 and 1900, in order to assess the familial component of longevity. This thesis consists of three articles. In the first, we address the evolution of the number of centenarians in Quebec since the 1920s. On the basis of demographic indicators such as the centenarian ratio, survival probabilities and the mean maximal age at death, we highlight the remarkable progress achieved in survival at the oldest ages. We also decompose the factors responsible for the increase in the number of centenarians in Quebec; among the factors identified, the increase in the probability of surviving from age 80 to age 100 stands out as the main determinant of the growth in the number of Quebec centenarians. The second article deals with the validation of the ages at death of centenarians from the 1870-1894 cohorts of French-Canadian origin and Catholic faith, born and deceased in Quebec. At the end of this validation process, we can state that Quebec data at the oldest ages are of excellent quality, so mortality trajectories of centenarians based on the raw data are representative of reality. The evolution of the death probabilities from age 100 onward shows a deceleration of mortality: for both men and women, the death probabilities plateau at around 45%. Finally, in the third article, we examine the familial component of longevity. We compare the survival of the siblings and parents of semi-supercentenarians who died between 1995 and 2004 to that of their respective birth cohorts. The survival differences between the siblings and parents of the semi-supercentenarians under observation and their "control" generations are statistically significant at the 0.01% level. Moreover, the siblings, fathers and mothers of the semi-supercentenarians are between 1.7 (sisters) and 3 times (mothers) more likely to reach age 90 than the members of their corresponding birth cohorts. These analyses leave no doubt that longevity clusters within certain families.
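
For reference, the death probabilities (mortality quotients) and the age-80-to-100 survival probability discussed above have standard life-table definitions; the notation below is the conventional one, not necessarily the thesis's exact formulation.

```latex
q_x = \frac{D_x}{N_x}, \qquad {}_{20}p_{80} = \prod_{x=80}^{99} \left(1 - q_x\right)
```

Here $D_x$ is the number of deaths between exact ages $x$ and $x+1$ among the $N_x$ individuals alive at exact age $x$; the plateau reported above corresponds to $q_x \approx 0.45$ for $x \ge 100$.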

Relevance: 90.00%

Abstract:

Markov chain Monte Carlo (MCMC) methods are very popular tools for sampling from complex and/or high-dimensional probability distributions. Given their ease of application, these methods are widespread in many scientific communities, certainly including statistics, and particularly Bayesian analysis. Since the appearance of the first MCMC method in 1953, the number of such algorithms has grown considerably, and the subject remains an active area of research. A new MCMC algorithm with directional adjustment was recently developed by Bédard et al. (IJSS, 9:2008), and some of its properties remain partially unknown. The objective of this Master's thesis is to establish the impact of a key parameter of this method on the overall performance of the approach. A second objective is to compare this algorithm to other, more versatile MCMC methods in order to assess its relative performance.
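
The directionally adjusted algorithm of Bédard et al. is not described in the abstract and is not reproduced here; the Python sketch below shows a baseline random-walk Metropolis sampler of the kind such an algorithm is typically compared against, on a toy two-dimensional Gaussian target.

```python
# Baseline random-walk Metropolis sampler on a toy 2-D Gaussian target;
# illustrative only, not the directionally adjusted method of the thesis.
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    """Unnormalized log-density of a standard 2-D Gaussian."""
    return -0.5 * np.sum(x**2)

def metropolis(n_samples=5000, step=0.8, dim=2):
    x = np.zeros(dim)
    chain, accepted = [], 0
    for _ in range(n_samples):
        proposal = x + step * rng.normal(size=dim)   # symmetric proposal
        # accept with probability min(1, pi(proposal)/pi(x))
        if np.log(rng.random()) < log_target(proposal) - log_target(x):
            x, accepted = proposal, accepted + 1
        chain.append(x)
    return np.array(chain), accepted / n_samples

chain, acc_rate = metropolis()
print(f"acceptance rate: {acc_rate:.2f}")  # tune `step` toward roughly 0.2-0.4
```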

Relevance: 90.00%

Abstract:

Introduction: It is important to minimize the waste and risks associated with low-value care. Antimicrobial stewardship aims to optimize antimicrobial use and must be adapted to the setting and its population. Objectives: To evaluate current patterns of antimicrobial use and to set targets for antimicrobial stewardship interventions. Methods: Twenty-one New Brunswick hospitals providing acute care in general medicine, surgery and paediatrics took part in a point prevalence survey. All patients admitted to the participating hospitals who had received at least one systemic antimicrobial were enrolled in the study. The main outcome measures were the pattern of use, by indication and prescribed antimicrobial; the appropriateness of use; and the duration of surgical prophylaxis. Descriptive statistics and a χ² test of independence were used for data analysis. Results: The survey was conducted from June to August 2012. A total of 2,244 patients were admitted during the study period, and 529 (23.6%) received an antimicrobial. In all, 691 antimicrobials were prescribed: 587 (85%) for treatment and 104 (15%) for prophylaxis. The antimicrobials most often prescribed for treatment (n=587) were of the following classes: quinolones (25.6%), extended-spectrum penicillins (10.2%) and metronidazole (8.5%). The most common indications for treatment were pneumonia (30%), gastrointestinal infections (16%) and skin and soft-tissue infections (14%). According to predefined criteria, 23% (n=134) of treatment prescriptions were inappropriate and 20% (n=120) had no documented indication. The areas in which prescriptions were inappropriate were: failure to switch from the intravenous to the oral route (n=34, 6%), wrong dose (n=30, 5%), treatment of asymptomatic bacteriuria (n=24, 4%) and unnecessary duplication (n=22, 4%). Surgical prophylaxis prescriptions exceeded 24 hours in 33% (n=27) of cases. Conclusions: The results show that antimicrobial stewardship efforts should focus on conventional stewardship interventions, improving documentation, optimizing quinolone use and minimizing the duration of surgical prophylaxis.
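
As an illustration of the χ² test of independence named in the methods (the survey's actual cross-tabulations are not given in the abstract), a minimal Python sketch with hypothetical counts:

```python
# Minimal chi-squared test of independence; the 2x2 counts are
# illustrative placeholders, not the survey's data.
import numpy as np
from scipy.stats import chi2_contingency

# rows: prescription appropriate / inappropriate
# cols: documented indication present / absent
table = np.array([[120, 30],
                  [ 45, 25]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```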

Relevance: 90.00%

Abstract:

The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which the service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the management tool SERVQUAL, by designing 28 constructs under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analyzed using statistical techniques to identify the weakest dimension. To assess the technical aspect of availability, six months of live outage data of base transceiver stations were collected. Statistical and exploratory techniques were used to model the network performance: failure patterns were modeled with competing-risk models, and the probability distributions of service outages and restorations were parameterized. Since the availability of the network is a function of the reliability and maintainability of the network elements, any service provider who wishes to keep up their service level agreements on availability should be aware of the variability of these elements and the effects of their interactions. The availability variations were studied by designing a discrete-time event simulation model with probabilistic input parameters. The distribution parameters derived from the live data analysis were used to design experiments that define the availability domain of the network under consideration; this availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed which incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers. The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of the mobile portability facility. It also makes possible a relative measure of the effectiveness of different service providers.
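
The study's distribution families and parameters are not given in the abstract, so the following Python sketch only illustrates the general shape of such a stochastic availability simulation: alternate random time-to-failure and time-to-restore draws and report availability as uptime over total time. The exponential/lognormal choices and all parameters are assumptions.

```python
# Sketch of a stochastic availability simulation; distribution choices and
# parameters are hypothetical, not the study's fitted values.
import numpy as np

rng = np.random.default_rng(42)

def simulate_availability(horizon_h=24 * 180, mttf=500.0, mttr=4.0):
    """Simulate one network element over `horizon_h` hours with exponential
    times to failure and lognormal restoration times (illustrative choices)."""
    t, uptime = 0.0, 0.0
    while t < horizon_h:
        up = rng.exponential(mttf)                 # time to next failure
        uptime += min(up, horizon_h - t)
        t += up
        if t >= horizon_h:
            break
        t += rng.lognormal(np.log(mttr), 0.5)      # restoration time
    return uptime / horizon_h

runs = [simulate_availability() for _ in range(1000)]
print(f"mean availability: {np.mean(runs):.4f} "
      f"(5th-95th pct: {np.percentile(runs, 5):.4f}-{np.percentile(runs, 95):.4f})")
```

Repeating such runs over a grid of failure and restoration parameters is one way to map out the "availability domain" the abstract refers to.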

Relevance: 90.00%

Abstract:

Low-grade and high-grade gliomas are tumors that originate in the glial cells. The main challenges in brain tumor diagnosis are whether a tumor is benign or malignant, primary or metastatic, and low or high grade. Based on the patient's MRI alone, a radiologist cannot differentiate between a low-grade and a high-grade glioma: because the two are almost visually similar, autopsy confirms the diagnosis of low-grade tumors with high-grade and infiltrative features. In this paper, textural descriptions of grade I and grade III glioma are extracted using first-order statistics and the Gray-Level Co-occurrence Matrix (GLCM) method. Textural features are extracted from 16×16 sub-images of the segmented region of interest (ROI). In the proposed method, first-order statistical features such as contrast, intensity, entropy, kurtosis and spectral energy, together with the extracted GLCM features, showed promising results: the ranges of these first-order statistics and GLCM-based features are highly discriminant between grade I and grade III. The study thus provides statistical textural information on grade I and grade III glioma that is very useful for further classification and analysis, assisting the radiologist to a great extent.
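
As a hedged illustration of the feature extraction described above (the paper's exact offsets, angles and feature set are not specified in the abstract), the following Python sketch computes first-order statistics and GLCM properties for a random stand-in 16×16 ROI patch using scikit-image.

```python
# First-order and GLCM texture features for a 16x16 ROI patch; the patch is
# a random stand-in, and the offsets/angles are illustrative choices.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy.stats import kurtosis, entropy

rng = np.random.default_rng(7)
roi = rng.integers(0, 256, (16, 16), dtype=np.uint8)  # stand-in ROI sub-image

# first-order statistics over the patch
hist, _ = np.histogram(roi, bins=256, range=(0, 256), density=True)
first_order = {
    "mean_intensity": roi.mean(),
    "entropy": entropy(hist + 1e-12),
    "kurtosis": kurtosis(roi.ravel()),
}

# second-order (GLCM) features at distance 1 over four angles
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
glcm_features = {prop: graycoprops(glcm, prop).mean()
                 for prop in ("contrast", "energy", "homogeneity", "correlation")}

print(first_order, glcm_features)
```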

Relevance: 90.00%

Abstract:

The characterization and grading of glioma tumors via image-derived features, for diagnosis, prognosis, and treatment response, has been an active research area in medical image computing. This paper presents a novel method for automatic detection and classification of glioma from conventional T2-weighted MR images. Automatic detection of the tumor was established using a newly developed method called the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA). Statistical features were extracted from the detected tumor texture using first-order statistics and gray-level co-occurrence matrix (GLCM) based second-order statistical methods. The statistical significance of the features was determined by a t-test and its corresponding p-value. A decision system was developed for glioma grade detection using the selected features and their p-values. The detection performance of the decision system was validated using the receiver operating characteristic (ROC) curve. The diagnosis and grading of glioma using this non-invasive method can contribute promising results to medical image computing.
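
The AGASA segmentation itself is not described in the abstract and is not reproduced here; the Python sketch below only illustrates the subsequent feature-significance (t-test) and ROC-validation steps on synthetic stand-in feature values.

```python
# t-test feature selection and ROC validation on synthetic stand-in data;
# the AGASA segmentation step of the paper is not shown.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
# hypothetical texture feature (e.g., GLCM contrast) for two tumor grades
grade_low = rng.normal(10.0, 2.0, 40)    # grade I cases
grade_high = rng.normal(13.0, 2.0, 40)   # grade III cases

t_stat, p_value = ttest_ind(grade_low, grade_high)
print(f"t={t_stat:.2f}, p={p_value:.4g}")  # keep the feature if p is small

# treat the feature value itself as the decision score and validate via ROC
scores = np.concatenate([grade_low, grade_high])
labels = np.concatenate([np.zeros(40), np.ones(40)])
print(f"AUC = {roc_auc_score(labels, scores):.3f}")
```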

Relevance: 90.00%

Abstract:

Econometrics is a young science. It developed during the twentieth century, from the mid-1930s onward, and primarily after World War II. Econometrics is the unification of statistical analysis, economic theory and mathematics, and its history can be traced to the use of statistical and mathematical analysis in economics. The most prominent contributions during the initial period can be seen in the works of Tinbergen and Frisch, and also that of Haavelmo in the 1940s through the mid-1950s. From the rudimentary application of statistics to economic data, such as the use of the laws of error through the development of least squares by Legendre, Laplace, and Gauss, the discipline later witnessed the applied works of Edgeworth and Mitchell. A very significant milestone in its evolution was the work of Tinbergen, Frisch, and Haavelmo in their development of multiple regression and correlation analysis, techniques they used to test different economic theories against time series data. Although some predictions based on econometric methodology may have gone wrong, the sound scientific nature of the discipline cannot be ignored; this is reflected in the economic rationale underlying any econometric model and in the statistical and mathematical reasoning behind the various inferences drawn.

The relevance of econometrics as an academic discipline assumes high significance in this context. Because of the interdisciplinary nature of econometrics (a unification of economics, statistics and mathematics), the subject can be taught in all these broad areas, notwithstanding the fact that most often only economics students are offered it, as students of other disciplines might lack an adequate economics background to understand the subject. In fact, econometrics is also quite relevant for technical courses (like engineering), business management courses (like the MBA), professional accountancy courses and the like, and even more so for research students of the various social sciences, commerce and management. In the ongoing scenario of globalization and economic deregulation, there is a need to give added thrust to the academic discipline of econometrics in higher education across the various social science streams, commerce, management, professional accountancy, etc. In this way the analytical ability of students can be sharpened, their ability to examine socio-economic problems with a mathematical approach can be improved, and they can be enabled to derive scientific inferences and solutions to such problems. The utmost significance of hands-on practical training in the use of computer-based econometric packages, especially at the postgraduate and research levels, must be pointed out here: mere learning of econometric methodology or the underlying theories alone would not have much practical utility for students in their future careers, whether in academia, industry, or practice.

This paper seeks to trace the historical development of econometrics and to study its current status as an academic discipline in higher education. Besides, the paper looks into the problems faced by teachers in teaching econometrics and by students in learning the subject, including effective application of the methodology in real-life situations. Accordingly, the paper offers some meaningful suggestions for effective teaching of econometrics in higher education.
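
As a small hands-on illustration of the multiple regression analysis referred to above (not drawn from any particular econometric study), the following Python sketch estimates a toy consumption function by ordinary least squares on simulated data.

```python
# Toy multiple regression by ordinary least squares; all series are
# simulated placeholders, purely for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 60  # e.g., 60 quarterly observations
income = rng.normal(100, 10, n)
interest_rate = rng.normal(5, 1, n)
# simulated consumption with known coefficients plus noise
consumption = 20 + 0.6 * income - 1.5 * interest_rate + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([income, interest_rate]))
model = sm.OLS(consumption, X).fit()
print(model.params)    # least-squares estimates of the consumption function
print(model.rsquared)  # goodness of fit
```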

Relevance: 90.00%

Abstract:

This study describes the development of the private higher education sector in Oman and analyses the expectations associated with it. It examines the main challenges this sector has to face and formulates several recommendations for strengthening the role of private higher education in Oman. To situate the Omani case, the literature on private higher education systems in various countries was reviewed comparatively. The author of this dissertation also examined and analysed numerous official documents, government statistics, reports, correspondence and unpublished material on education, the economy and human resource development. Semi-structured interviews were conducted with presidents and deans of private higher education institutions, as well as with several external stakeholders, in order to analyse the strengths and weaknesses, challenges and goals of the private higher education sector in Oman.