954 results for "Model of the semantic fields"
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands particularly to gain from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O'Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, providing as it does a reproducible result and access to relevant methodological advances in related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between making a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and making no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed for medieval text criticism in particular. By this, we mean that there is a need for an empirical statistical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have differed from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysing one or more stemma hypotheses against the variation model. We apply this method to three 'artificial traditions' (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced to varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate some of the wide variety of calculations that can be made using our model. Certain of our results call sharply into question the utility of excluding 'trivial' variation, such as orthographic and spelling changes, from stemmatic analysis.
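For illustration only, a minimal sketch (not the article's actual model) of one per-variant check such an analysis might perform against a stemma hypothesis, assuming networkx is available and treating every manuscript as extant; node names and readings are invented:

```python
# Hypothetical sketch: test whether each reading at a variant location is
# "genealogical" on a stemma hypothesis, i.e. whether the witnesses sharing a
# reading form a connected subgraph (no coincident change or contamination is
# needed to explain it). Real analyses must also allow for lost intermediary
# copies; this toy does not.
import networkx as nx

def reading_is_genealogical(stemma: nx.Graph, witnesses: set[str]) -> bool:
    """True if the witnesses induce a connected subgraph of the stemma."""
    return nx.is_connected(stemma.subgraph(witnesses))

# Toy stemma: archetype A with two branches.
stemma = nx.Graph([("A", "B"), ("A", "C"), ("B", "D"), ("B", "E"), ("C", "F")])

# One variant location: which witnesses attest each reading.
variant = {"reading1": {"B", "D", "E"}, "reading2": {"A", "C", "F"}}

for reading, wits in variant.items():
    print(f"{reading}: genealogical = {reading_is_genealogical(stemma, wits)}")
```

Repeating such a check over every variant location, and over variant types (substantive, orthographic, spelling), is one way to build the kind of empirical profile of text-genealogical significance the article calls for.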
Abstract:
Simulating surface wind over complex terrain is a challenge in regional climate modelling. This study therefore aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme, and the way WRF is nested to the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups sharing the same driving data set. The results show that the lack of representation of unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, where the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser. The results demonstrate the important role of horizontal resolution: the step from 6 to 2 km significantly improves model performance. In summary, the combination of a grid size of 2 km, the non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging is a superior set-up when dynamical downscaling aims at reproducing real wind fields.
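For illustration, a minimal sketch of the kind of skill metrics such a model-observation comparison relies on; the function and the toy series are assumptions of this sketch, not the study's verification code:

```python
# Illustrative sketch (not from the study): simple skill metrics for comparing
# downscaled wind speeds against station observations.
import numpy as np

def wind_skill(sim: np.ndarray, obs: np.ndarray) -> dict:
    """Bias, RMSE, and Pearson correlation for paired wind-speed series."""
    bias = float(np.mean(sim - obs))              # > 0 means overestimation
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
    corr = float(np.corrcoef(sim, obs)[0, 1])     # temporal agreement
    return {"bias_ms": bias, "rmse_ms": rmse, "corr": corr}

# Toy hourly series for one storm case (m/s); the simulation overestimates.
obs = np.array([8.0, 12.5, 15.0, 18.2, 14.1, 9.3])
sim = np.array([9.5, 14.0, 17.5, 21.0, 16.0, 10.8])
print(wind_skill(sim, obs))
```

A positive mean bias of this kind, computed per set-up across the 24 storms, is the signal that the PBL-scheme and nudging choices described above are meant to reduce.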
Abstract:
We investigate the transition from unitary to dissipative dynamics in the relativistic O(N) vector model with the λ(φ²)² interaction using the nonperturbative functional renormalization group in the real-time formalism. In thermal equilibrium, the theory is characterized by two scales, the interaction range for coherent scattering of particles and the mean free path determined by the rate of incoherent collisions with excitations in the thermal medium. Their competition determines the renormalization group flow and the effective dynamics of the model. Here we quantify the dynamic properties of the model in terms of the scale-dependent dynamic critical exponent z in the limit of large temperatures and in 2 ≤ d ≤ 4 spatial dimensions. We contrast our results to the behavior expected at vanishing temperature and address the question of the appropriate dynamic universality class for the given microscopic theory.
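For reference, a hedged sketch of the standard definition being quantified here, in assumed notation rather than the paper's own:

```latex
% Assumed notation (not the paper's): the characteristic relaxation
% frequency scales with the momentum k as
\[
  \omega(k) \sim k^{z},
\]
% and a scale-dependent exponent can be read off along the RG flow as a
% logarithmic derivative,
\[
  z(k) = \frac{\mathrm{d} \ln \omega(k)}{\mathrm{d} \ln k}.
\]
```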
New fully kinetic model for the study of electric potential, plasma, and dust above lunar landscapes
Abstract:
We have developed a new fully kinetic electrostatic simulation, HYBes, to study how the lunar landscape affects the electric potential and plasma distributions near the surface and the properties of lifted dust. The model embodies new techniques that can be used in various types of physical environments and situations. We demonstrate the applicability of the new model in a situation involving three charged-particle species: solar wind electrons, solar wind protons, and lunar photoelectrons. Properties of dust are studied with test-particle simulations using the electric fields derived from the HYBes model. The simulations show how strongly the landscape shapes the plasma and the electric potential near the surface: electric potential gradients near landscape features with sizes of the order of the Debye length are much larger than those near a flat surface, across a range of solar zenith angles. Furthermore, the dust test-particle simulations indicate that the landscape relief influences where dust is found above the surface. The study suggests that the local landscape has to be taken into account when the distributions of plasma and dust above the lunar surface are studied. The HYBes model can be applied not only at the Moon but also to a wide range of airless planetary objects such as Mercury, other planetary moons, asteroids, and inactive comets.
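A hedged, illustrative test-particle step of the general kind described here (not the HYBes code); the field profile, charge-to-mass ratio, and time step are invented for the sketch:

```python
# Illustrative only: a charged dust grain in a prescribed near-surface
# electric field plus lunar gravity, advanced with a kick-drift-kick step.
import numpy as np

Q_OVER_M = 0.05        # grain charge-to-mass ratio (C/kg), illustrative
G_MOON = 1.62          # lunar surface gravity (m/s^2)

def e_field(pos: np.ndarray) -> np.ndarray:
    """Toy vertical sheath field decaying over a Debye-like length (V/m)."""
    debye = 1.0  # m, illustrative
    return np.array([0.0, 0.0, 5.0 * np.exp(-pos[2] / debye)])

def leapfrog_step(pos, vel, dt):
    """One kick-drift-kick step for the dust grain."""
    grav = np.array([0.0, 0.0, -G_MOON])
    vel_half = vel + 0.5 * dt * (Q_OVER_M * e_field(pos) + grav)
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * (Q_OVER_M * e_field(pos_new) + grav)
    return pos_new, vel_new

# A grain lofted from just above the surface at 1 m/s.
pos, vel = np.array([0.0, 0.0, 0.01]), np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    pos, vel = leapfrog_step(pos, vel, dt=1e-3)
print("height after 1 s:", round(float(pos[2]), 3), "m")
```

In a study like the one above, the prescribed field would instead be interpolated from the simulated potential over the landscape, which is precisely where the relief effects enter.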
Abstract:
Interpretation of ice-core records requires accurate knowledge of the past and present surface topography and stress-strain fields. The European Project for Ice Coring in Antarctica (EPICA) drilling site (0.0684° E, 75.0025° S, 2891.7 m) in Dronning Maud Land, Antarctica, is located in the immediate vicinity of a transient and splitting ice divide. A digital elevation model is determined from the combination of kinematic GPS measurements with the GLAS12 data sets from the ICESat satellite. Based on a network of stakes surveyed with static GPS, the velocity field around the EDML drilling site is calculated. The annual mean velocity magnitude of the 12 survey points amounts to 0.74 m/a. Flow directions vary mainly according to their distance from the ice divide. Surface strain rates are determined from a pentagon-shaped stake network with one centre point, close to the drilling site. The strain field is characterised by along-flow compression, lateral dilatation, and vertical layer thinning.
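A minimal sketch of how surface strain rates can be estimated from such a stake network, assuming a uniform velocity gradient across the network and ice incompressibility for the vertical component; the positions and velocities below are invented, not the survey data:

```python
# Hedged sketch (not the study's processing chain): least-squares fit of a
# uniform horizontal velocity gradient to stake velocities, then the vertical
# strain rate from incompressibility.
import numpy as np

# Stake positions (m) relative to the network centre, and velocities (m/a).
xy = np.array([[0, 0], [800, 100], [300, 760], [-650, 480],
               [-600, -500], [400, -700]], dtype=float)
uv = np.array([[0.74, 0.02], [0.78, 0.01], [0.75, 0.05],
               [0.70, 0.04], [0.71, -0.01], [0.76, -0.03]])

# Fit u = u0 + (du/dx) x + (du/dy) y, and likewise for v.
A = np.column_stack([np.ones(len(xy)), xy])          # design matrix [1, x, y]
(u0, dudx, dudy), *_ = np.linalg.lstsq(A, uv[:, 0], rcond=None)
(v0, dvdx, dvdy), *_ = np.linalg.lstsq(A, uv[:, 1], rcond=None)

exx, eyy = dudx, dvdy            # horizontal strain rates (1/a)
exy = 0.5 * (dudy + dvdx)        # shear strain rate
ezz = -(exx + eyy)               # vertical rate, from ice incompressibility
print(f"exx={exx:.2e}, eyy={eyy:.2e}, exy={exy:.2e}, ezz={ezz:.2e} 1/a")
```

With along-flow compression (exx < 0) and lateral dilatation (eyy > 0) of smaller magnitude, ezz comes out negative, i.e. vertical layer thinning, matching the qualitative picture in the abstract.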
Abstract:
Optimum conditions were selected for the chromatographic separation of model mixtures of C12-C40 n-alkanes. For one of the hydrothermal deposit samples, the hydrocarbon extraction conditions were studied and a sample preparation procedure was selected. A procedure was proposed for determining n-alkanes in samples of hydrothermal deposits by gas chromatography-mass spectrometry (GC-MS). The detection limit for n-alkanes was 3×10^-9 to 10^-8%, depending on the component. Using the proposed procedure, the n-alkane composition was studied in samples of hydrothermal deposits collected at the Mid-Atlantic Ridge (Broken Spur, Lost City, and Rainbow hydrothermal fields). The analyses showed that the samples contained C14-C35 n-alkanes. Concentrations of the n-alkanes were rather low, varying from 0.002 to 0.038 µg/g. Hypotheses concerning the genesis of the identified n-alkanes are offered.
Abstract:
This study shows the air flow behavior around the geometry of a freight truck inside an AF6109 wind tunnel, with the purpose of predicting the velocity, pressure, and turbulence fields produced by the air flow, decreasing the aerodynamic resistance, calculating the drag coefficient, evaluating the aerodynamics of the geometry of the prototype using the CFD technique, and comparing the results of the simulation with those obtained experimentally with the "PETER 739 HAULER" scaled freight truck model located on the floor of the test chamber. The geometry went through a numerical simulation process using CFX 5.7. The results showed the behavior of the air flow through the test chamber, the variations of velocity and pressure at the exit of the chamber, and the calculations of the drag coefficient and drag force on the geometry of the freight truck. The aerodynamic evaluation showed that the aerodynamic deflector is a device that significantly helped to reduce the drag produced by the air. Furthermore, the drag coefficient and drag force on the prototype freight truck could be estimated by establishing an incomplete similarity.
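As a rough illustration of the drag bookkeeping involved (the numbers are invented, not the study's data), the drag coefficient follows from the drag force, air density, flow speed, and frontal area:

```python
# Hedged example: Cd = F / (0.5 * rho * v^2 * A), with illustrative values
# for a scaled truck model in a test chamber.
RHO_AIR = 1.225      # air density at sea level (kg/m^3)

def drag_coefficient(force_n: float, speed_ms: float, frontal_area_m2: float) -> float:
    """Dimensionless drag coefficient from the drag force on the model."""
    dynamic_pressure = 0.5 * RHO_AIR * speed_ms ** 2
    return force_n / (dynamic_pressure * frontal_area_m2)

print(drag_coefficient(force_n=18.0, speed_ms=30.0, frontal_area_m2=0.05))
# ~0.65: a plausible order of magnitude for a truck geometry.
```

Under incomplete similarity, a Cd obtained at model scale carries over to the prototype only approximately, since the Reynolds numbers of model and prototype are not matched.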
Abstract:
PURPOSE
The decision-making process plays a key role in organizations. Every decision-making process produces a final choice that may or may not prompt action. Recurrently, decision makers face the dichotomous question of whether to follow a traditional sequential decision-making process, where the output of one decision is used as the input of the next stage, or a joint decision-making approach, where several decisions are taken simultaneously. The implications of the decision-making process affect different players in the organization, and the choice of approach remains difficult even with the current literature and practitioners' knowledge. The pursuit of better ways of making decisions has been a common goal for academics and practitioners, and management scientists use different techniques and approaches to improve different types of decisions. The purpose is to use the available resources (data and techniques) as well as possible to achieve the objectives of the organization. Developing and applying models and concepts may help solve the managerial problems faced every day in different companies. As a result of this research, different decision models are presented to contribute to the body of knowledge of management science. The first models focus on the manufacturing industry and the second set on the health care industry. Although these models are case-specific, they serve to show that different approaches to the problems can provide interesting results. Unfortunately, there is no universal recipe that can be applied to all problems; the same model may deliver good results with certain data and bad results with other data. A framework for analysing the data before selecting the model to be used is therefore presented and tested on the models developed to exemplify these ideas.
METHODOLOGY
As the first step of the research, a systematic literature review on joint decision making is presented, together with the opinions and suggestions of different scholars. For the next stage of the thesis, the decision-making processes of more than 50 companies from different sectors were analysed in the production-planning area at the job-shop level. The data were obtained through surveys and face-to-face interviews. The following part of the research into the decision-making process was carried out in two application fields that are highly relevant for our society: manufacturing and health care. The first step was to study the interactions and develop a mathematical model for the replenishment of the car assembly line, where the vehicle routing problem and the inventory problem were combined. The next step was to add the car-production scheduling (car sequencing) decision and use metaheuristics such as ant colony optimisation and genetic algorithms to test whether the behaviour holds for problem instances of different sizes (a minimal sketch of such a metaheuristic is given after this abstract). A similar approach is presented for the production of semiconductors and aviation parts, where a hoist has to move from one station to another to carry out the work and a job schedule has to be produced; for this problem, however, simulation was used for experimentation. In parallel, the scheduling of operating rooms was studied: surgeries were allocated to surgeons and the scheduling of operating rooms was analysed. The first part of this research was done in a teaching hospital, and in the second part the interaction with uncertainty was added.
Once the previous problems had been analysed, a general framework to characterize the instance was built. In the final chapter a general conclusion is presented.
FINDINGS AND PRACTICAL IMPLICATIONS
The first contribution is an update of the decision-making literature review, together with an analysis of the possible savings resulting from a change in the decision process. The survey results are then presented, which reveal a lack of consistency between what managers believe and the reality of the integration of their decisions. The next stage of the thesis contributes to the body of knowledge of operations research with the joint solution of the replenishment, sequencing, and inventory problem on the assembly line, together with parallel work on operating-room scheduling, for which different solution approaches are presented. Beyond the solution methods and the use of different techniques, the main contribution is the framework proposed to pre-evaluate a problem before deciding on the techniques to solve it. There is, however, no straightforward answer as to whether joint or sequential solutions are better. Following the proposed framework, the evaluation of factors such as the flexibility of the answer, the number of actors, and the tightness of the data gives important hints as to the most suitable direction for tackling the problem.
RESEARCH LIMITATIONS AND AVENUES FOR FUTURE RESEARCH
In the first part of the work it was very difficult to calculate the possible savings of different projects, since many papers do not report these quantities or base the impact on non-quantifiable benefits. Another issue is the confidentiality of many projects whose data cannot be presented. For the car assembly line problem, more computational power would allow bigger instances to be solved. For the operating-room problem there was a lack of historical data with which to perform a parallel analysis in the teaching hospital. To keep testing the decision framework it is necessary to apply it to more case studies, in order to generalize the results and make them more evident and less ambiguous. The health care field offers great opportunities: despite the recent awareness of the need to improve the decision-making process, many opportunities for improvement remain. Another big difference from the automotive industry is that the latest improvements are not spread among all the actors. In the future, therefore, this research will focus more on collaboration between academia and the health care sector.
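As promised above, a minimal, hypothetical sketch of a metaheuristic of the kind mentioned for car sequencing; the toy objective, demand data, operators, and parameters are invented and are not the thesis's models:

```python
# Illustrative genetic-algorithm loop: evolve a production sequence of car
# variants to minimise a toy sequencing-rule violation count.
import random

CARS = ["A"] * 6 + ["B"] * 4 + ["C"] * 2   # toy demand per car variant

def fitness(seq):
    """Toy rule to minimise: number of identical adjacent variants."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a == b)

def mutate(seq):
    """Swap two positions; swapping keeps the demand mix feasible."""
    s = seq[:]
    i, j = random.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def ga(pop_size=30, generations=200):
    pop = [random.sample(CARS, len(CARS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                       # elitist survival
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)

best = ga()
print("".join(best), "violations:", fitness(best))
```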
Abstract:
A large fraction of gamma-ray bursts (GRBs) displays an X-ray plateau phase within <10^5 s of the prompt emission, proposed to be powered by the spin-down energy of a rapidly spinning, newly born magnetar. In this work we use the properties of the Galactic neutron star population to constrain the GRB-magnetar scenario. We re-analyze the X-ray plateaus of all Swift GRBs with known redshift between 2005 January and 2014 August. From the derived initial magnetic field distribution for the possible magnetars left behind by the GRBs, we study the evolution and properties of a simulated GRB-magnetar population using numerical simulations of magnetic field evolution, coupled with Monte Carlo simulations of pulsar population synthesis in our Galaxy. We find that if the GRB X-ray plateaus are powered by the rotational energy of a newly formed magnetar, the current observational properties of the Galactic magnetar population are not compatible with being formed within the GRB scenario (regardless of the GRB type or rate at z = 0). Direct consequences would be that we should allow for the existence of magnetars and "super-magnetars" having different progenitors, and that the Type Ib/c SNe related to Long GRBs systematically form neutron stars with higher initial magnetic fields. We put an upper limit of ≤16 "super-magnetars" formed by a GRB in our Galaxy in the past Myr (at 99% c.l.). This limit is somewhat smaller than what is roughly expected from Long GRB rates, although the very large uncertainties do not allow us to draw strong conclusions in this respect.
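For context, such analyses typically rest on the standard magnetic dipole spin-down relations; a hedged reminder in assumed notation (B is the initial dipole field, P_0 the initial spin period), not formulas quoted from the paper:

```latex
% Textbook dipole spin-down scalings, in notation assumed for this sketch:
\[
  L_{\mathrm{sd}}(t) = \frac{L_0}{\left(1 + t/t_{\mathrm{sd}}\right)^{2}},
  \qquad
  L_0 \propto B^{2} P_0^{-4},
  \qquad
  t_{\mathrm{sd}} \propto B^{-2} P_0^{2},
\]
% so a plateau's luminosity and break time jointly constrain B and P_0,
% which is how an initial magnetic field distribution can be derived from
% the observed X-ray plateaus.
```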
Abstract:
The XXZ Gaudin model with generic integrable boundaries specified by generic non-diagonal K-matrices is studied. The commuting families of Gaudin operators are diagonalized by the algebraic Bethe ansatz method. The eigenvalues and the corresponding Bethe ansatz equations are obtained.
Abstract:
Semantic data models provide a map of the components of an information system. The characteristics of these models affect their usefulness for various tasks (e.g., information retrieval). The quality of information retrieval has obvious important consequences, both economic and otherwise. Traditionally, database designers have produced parsimonious logical data models. In spite of their increased size, ontologically clearer conceptual models have been shown to facilitate better performance for both problem-solving and information retrieval tasks in experimental settings. The experiments producing evidence of enhanced performance for ontologically clearer models have, however, used application domains of modest size. Data models in organizational settings are likely to be substantially larger than those used in these experiments. This research used an experiment to investigate whether the benefits of improved information retrieval performance associated with ontologically clearer models are robust as the size of the application domain increases. The experiment used an application domain approximately twice the size of those tested in prior experiments. The results indicate that, relative to the users of the parsimonious implementation, end users of the ontologically clearer implementation made significantly more semantic errors, took significantly more time to compose their queries, and were significantly less confident in the accuracy of their queries.
Abstract:
We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
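A hedged sketch of the signature-and-template scheme just described, for a one-dimensional Gaussian-blurred edge; the discretisation, normalisation, and parameter values are assumptions of this sketch, not the authors' implementation:

```python
# Illustrative pipeline: differentiate the edge profile, half-wave rectify,
# differentiate again to form the nonlinear second-derivative "signature",
# then pick the Gaussian first-derivative template whose scale fits best.
from math import erf, sqrt, pi
import numpy as np

x = np.linspace(-8.0, 8.0, 1025)
true_blur = 1.5
edge = np.array([0.5 * (1 + erf(v / (true_blur * sqrt(2)))) for v in x])

d1 = np.gradient(edge, x)                  # first-derivative operator
sig = np.gradient(np.maximum(d1, 0.0), x)  # half-wave rectify, differentiate again

def template(sigma: float) -> np.ndarray:
    """Gaussian first-derivative template of scale sigma."""
    return -x / sigma**3 / sqrt(2 * pi) * np.exp(-x**2 / (2 * sigma**2))

def fit(sigma: float) -> float:
    """Cosine similarity between the signature and a template."""
    t = template(sigma)
    return float(np.dot(sig, t) / (np.linalg.norm(sig) * np.linalg.norm(t)))

sigmas = np.arange(0.5, 3.05, 0.1)
best = sigmas[np.argmax([fit(s) for s in sigmas])]
print(f"template scale signalling the blur: {best:.1f} (true blur {true_blur})")
```

For this single edge the rectifier is inert, since the gradient never changes sign; its role emerges for gratings and other periodic waveforms, where it segments the gradient profile into regions of common sign, which is exactly the second principle the abstract describes.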
Abstract:
This thesis presents a new approach to designing large organizational databases. The approach emphasizes the need for a holistic approach to the design process. The development of the proposed approach was based on a comprehensive examination of the issues relevant to the design and utilization of databases, including conceptual modelling, organization theory, and semantic theory. The conceptual modelling approach presented in this thesis is developed over three design stages, or model perspectives. In the semantic perspective, concept definitions are developed based on established semantic principles. Such definitions rely on meaning, provided by intension and extension, to determine intrinsic conceptual definitions. A tool, called meaning-based classification (MBC), is devised to classify concepts based on meaning. Concept classes are then integrated using concept definitions and a set of semantic relations which rely on concept content and form. In the application perspective, relationships are semantically defined according to the application environment. Relationship definitions include explicit relationship properties and constraints. The organization perspective introduces a new set of relations specifically developed to maintain conformity of conceptual abstractions with the nature of the information abstractions implied by user requirements throughout the organization. Such relations are based on the stratification of work hierarchies, defined elsewhere in the thesis. Finally, an example application of the proposed approach is presented to illustrate the applicability and practicality of the modelling approach.
Abstract:
Despite the difficulties that we have regarding the use of English in tertiary education in Turkey, we argue that it is necessary for those involved to study in the medium of English. Furthermore, significant advances have been made on this front. These efforts have been for the most part language-oriented, but also include research into needs analysis and the pedagogy of team-teaching. Considering the current situation at this level of education, however, there still seems to be more to do. And the question is, what more can we do? What further contribution can we make? Or, how can we take this process further? The purpose of the study reported here is to respond to this last question. We test the proposition that it is possible to take this process further by investigating the efficient management of transition from Turkish-medium to English-medium at the tertiary level of education in Turkey. Beyond what is achieved by only the language orientation of the EAP approach, and moving conceptually deeper than what has been achieved by the team-teaching approach, the research undertaken for the purpose of this study focuses on the idea of the discourse community that people want to belong to. It then pursues an adaptation of the essentially psycho-social approach of apprenticeship, as people become aspirants and apprentices to that discourse community. In this thesis, the researcher recognises that she cannot follow all the way through to the full implementation of her ideas in a fully-taught course. She is not in a position to change the education system. What she does here is to introduce a concept and sample its effects in terms of motivation, and thereby of integration and of success, for individuals and groups of learners. Evaluation is provided by acquiring both qualitative and quantitative data concerning mature members' perceptions of apprenticed-neophytes functioning as members in the new community, apprenticed-neophytes' perceptions of their own membership and of the preparation process undertaken, and the comparison of these neophytes' performance with that of other neophytes in the community. The data obtained provide strong evidence in support of the potential usefulness of this apprenticeship model towards the declared purpose of improving the English-medium tertiary education of Turkish students in their chosen fields of study.
Abstract:
The Ellison Executive Mentoring Inclusive Community Building (ICB) Model is a paradigm for initiating and implementing projects utilizing executives and professionals from a variety of fields and industries, university students, and pre-college students. The model emphasizes adherence to ethical values and promotes inclusiveness in community development. It is a hierarchical model in which actors in each succeeding level of operation serve as mentors to the next. Through a three-step process (content, process, and product), participants must be trained with this mentoring and apprenticeship paradigm in conflict resolution, and they receive sensitivity and diversity training through an interactive and dramatic exposition. The content phase introduces participants to the model's philosophy, ethics, values and methods of operation. The process used to teach and reinforce its precepts is the mentoring and apprenticeship activities and projects in which the participants engage and whose end product demonstrates their knowledge and understanding of the model's concepts. This study sought to ascertain from the participants' perspectives whether the model's mentoring approach is an effective means of fostering inclusiveness, based upon their own experiences in using it. The research utilized a qualitative approach and included data from field observations, individual and group interviews, and written accounts of participants' attitudes. Participants complete ICB projects utilizing The Ellison Model as a method of development and implementation. They generally perceive that the model is a viable tool for dealing with diversity issues whether at work, at school, or at home. The projects are also instructional in that whether participants are mentored or serve as apprentices, they gain useful skills and knowledge about their careers. Since the model is relatively new, there is ample room for research in a variety of areas, including organizational studies, to determine its effectiveness in combating problems related to various kinds of discrimination.