983 results for Software Packages


Relevance: 60.00%

Abstract:

Quantitative measures of polygon shape and orientation are important elements of geospatial analysis. Such measures are particularly valuable in the case of lakes, where shape and orientation patterns can help identify the geomorphological agents behind lake formation and evolution. However, because commercial geographic information system (GIS) software packages lack built-in tools designed for this kind of analysis, researchers often rely on tools and workarounds that are not always accurate. Here, an easy-to-use method to measure rectangularity R, ellipticity E, and orientation O is developed. In addition, a new rectangularity vs. ellipticity index, REi, is defined. A step-by-step process shows how these measures and the index can be calculated using a combination of GIS built-in functions. The method's identification of shapes and estimation of orientations is applied to the case study of the geometric and oriented lakes of the Llanos de Moxos, in the Bolivian Amazon, where shape and orientation have been the two most important elements studied to infer possible formation mechanisms. The new indices unveil shape and orientation patterns that would otherwise have been hard to identify.
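The abstract does not reproduce the formulas, but the three measures can be approximated with standard GIS primitives. Below is a minimal Python sketch using shapely, under the assumption that rectangularity is the ratio of the polygon's area to that of its minimum rotated bounding rectangle, ellipticity compares the area to the ellipse inscribed in that rectangle, and orientation is the azimuth of the rectangle's long axis; the paper's own definitions may differ.

```python
import math
from shapely.geometry import Polygon

def shape_measures(poly: Polygon):
    """Rectangularity, ellipticity and orientation of a polygon.

    Assumed definitions (the paper's exact formulas are not given here):
    R = area(polygon) / area(minimum rotated bounding rectangle)
    E = area(polygon) / area(ellipse inscribed in that rectangle)
    O = angle of the rectangle's long axis, degrees in [0, 180).
    E is 1 for an ellipse and exceeds 1 for rectangle-like shapes.
    """
    rect = poly.minimum_rotated_rectangle
    (x0, y0), (x1, y1), (x2, y2) = list(rect.exterior.coords)[:3]
    side_a = math.hypot(x1 - x0, y1 - y0)
    side_b = math.hypot(x2 - x1, y2 - y1)
    major, minor = max(side_a, side_b), min(side_a, side_b)
    # Orientation follows the longer rectangle edge
    if side_a >= side_b:
        angle = math.degrees(math.atan2(y1 - y0, x1 - x0)) % 180.0
    else:
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    R = poly.area / rect.area
    E = poly.area / (math.pi * (major / 2) * (minor / 2))
    return R, E, angle

# A 2:1 rectangle: R is exactly 1, E is above 1 (more rectangular than elliptical)
print(shape_measures(Polygon([(0, 0), (2, 0), (2, 1), (0, 1)])))
```

The contrast between R and E in this sketch is the kind of signal a rectangularity-vs-ellipticity index can exploit.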

Relevance: 60.00%

Abstract:

Microsoft Project is one of the most widely used software packages for project management. For the scheduling of resource-constrained projects, the package applies a priority-based procedure using a specific schedule-generation scheme. This procedure performs relatively poorly when compared against other software packages or state-of-the-art methods for resource-constrained project scheduling. Microsoft Project 2010 makes it possible to work with schedules that are infeasible with respect to the precedence or the resource constraints. We propose a novel schedule-generation scheme that exploits this possibility. Under this scheme, the project tasks are scheduled sequentially while taking into account all temporal and resource constraints that a user can define within Microsoft Project. The scheme can be implemented as a priority-rule based heuristic procedure. Our computational results for two real-world construction projects indicate that this procedure outperforms the built-in procedure of Microsoft Project.
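The authors' scheme itself is not specified in the abstract; the sketch below shows the textbook serial schedule-generation scheme on which such priority-rule heuristics are built, in a minimal single-resource version. It is not Microsoft Project's or the authors' actual procedure.

```python
def serial_sgs(tasks, capacity, priority):
    """Minimal serial schedule-generation scheme for the RCPSP.

    tasks: {id: (duration, demand, predecessors)} for one renewable
    resource; `priority` is a precedence-feasible task order.
    Each task starts at the earliest time that satisfies both its
    predecessors and the resource capacity in every period it occupies.
    """
    start, usage = {}, {}  # usage[t] = resource units consumed in period t
    for j in priority:
        dur, dem, preds = tasks[j]
        t = max((start[p] + tasks[p][0] for p in preds), default=0)
        while any(usage.get(t + k, 0) + dem > capacity for k in range(dur)):
            t += 1
        start[j] = t
        for k in range(dur):
            usage[t + k] = usage.get(t + k, 0) + dem
    return start

tasks = {1: (3, 2, []), 2: (2, 2, []), 3: (2, 1, [1, 2])}
print(serial_sgs(tasks, capacity=3, priority=[1, 2, 3]))  # {1: 0, 2: 3, 3: 5}
```

Changing `priority` is exactly the lever a priority-rule heuristic turns: the same scheme with a different task order can produce a very different makespan.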

Relevance: 60.00%

Abstract:

Most commercial project management software packages include planning methods to devise schedules for resource-constrained projects. Because the implemented planning methods are proprietary information of the software vendors, the question arises how the packages differ in the quality of their resource-allocation capabilities. We experimentally evaluate the resource-allocation capabilities of eight recent software packages using 1,560 instances with 30, 60, and 120 activities from the well-known PSPLIB library. In some of the analyzed packages, the user may influence the resource allocation by means of multi-level priority rules, whereas in other packages only a few options can be chosen. We study the impact of various complexity parameters and priority rules on the project duration obtained by the software packages. The results indicate that the resource-allocation capabilities of these packages differ significantly. In general, the relative gap between the packages grows with increasing resource scarcity and with an increasing number of activities. Moreover, the selection of the priority rule has a considerable impact on the project duration. Surprisingly, for the packages that allow a priority rule to be selected, both the mean and the variance of the project duration are in general worse than for the packages that offer no such selection.
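A natural way to compare packages on PSPLIB instances is the relative deviation of each package's makespan from the best known value. A small Python sketch, assuming the usual gap definition (the paper's exact metric is not quoted in the abstract) and made-up makespans:

```python
def relative_gaps(makespans, best_known):
    """Mean relative gap of each package's makespans to the best-known
    PSPLIB makespans. Assumed metric: (Cmax - best) / best."""
    gaps = {}
    for package, results in makespans.items():
        devs = [(results[i] - best_known[i]) / best_known[i] for i in best_known]
        gaps[package] = sum(devs) / len(devs)
    return gaps

best = {"j30_1": 43, "j30_2": 47}  # hypothetical best-known makespans
print(relative_gaps({"PkgA": {"j30_1": 45, "j30_2": 50},
                     "PkgB": {"j30_1": 43, "j30_2": 55}}, best))
```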

Relevance: 60.00%

Abstract:

Over 250 Mendelian traits and disorders caused by rare alleles have been mapped in the canine genome. Although each disease is rare in the dog as a species, they are collectively common and have a major impact on canine health. With SNP-based genotyping arrays, genome-wide association studies (GWAS) have proven to be a powerful method to map the genomic region of interest when 10-20 cases and 10-20 controls are available. However, identifying the genetic variant in associated regions requires fine-mapping and targeted re-sequencing. Here we present a new approach using whole-genome sequencing (WGS) of a family trio without a prior GWAS. As a proof of concept, we chose an autosomal recessive disease known as hereditary footpad hyperkeratosis (HFH) in Kromfohrländer dogs. To our knowledge, this is the first time this family-trio WGS approach has successfully been used to identify a genetic variant that perfectly segregates with a canine disorder. The sequencing of three Kromfohrländer dogs from a family trio (an affected offspring and both its healthy parents) resulted in an average genome coverage of 9.2X per individual. After applying stringent filtering criteria for candidate causative coding variants, 527 single nucleotide variants (SNVs) and 15 indels were found to be homozygous in the affected offspring and heterozygous in the parents. Functional annotation of the coding sequence differences with ANNOVAR and prediction of their functional effect with SIFT resulted in seven candidate variants located in six different genes. Of these, only FAM83G:c.155G>C (p.R52P) was found to be concordant in eight additional cases and 16 healthy Kromfohrländer dogs.
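The core trio filter is a few lines once genotypes are called. A schematic Python version of the autosomal recessive criterion described above; the genotype encoding and sample names are illustrative, and a real pipeline would read a VCF with a library such as cyvcf2 or pysam:

```python
def recessive_candidates(variants, child, sire, dam):
    """Filter for an autosomal recessive model in a family trio.

    variants: iterable of (variant_id, {sample: genotype}) where genotype
    is the count of alternate alleles (0, 1 or 2). Keeps variants that are
    homozygous-alt in the affected offspring and heterozygous in both
    healthy parents, mirroring the filtering step described above.
    """
    for vid, gt in variants:
        if gt[child] == 2 and gt[sire] == 1 and gt[dam] == 1:
            yield vid

calls = [
    ("chr5:123G>C", {"child": 2, "sire": 1, "dam": 1}),  # kept: fits the model
    ("chr5:456A>T", {"child": 2, "sire": 2, "dam": 1}),  # dropped: parent homozygous
]
print(list(recessive_candidates(calls, "child", "sire", "dam")))
```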

Relevance: 60.00%

Abstract:

Rare event search experiments using liquid xenon as target and detection medium require ultra-low background levels to fully exploit their physics potential. Cosmic-ray-induced activation of the detector components and, even more importantly, of the xenon itself during production, transportation and storage at the Earth's surface might result in the production of radioactive isotopes with long half-lives, with a possible impact on the expected background. We present the first dedicated study of the cosmogenic activation of xenon after 345 days of exposure to cosmic rays at the Jungfraujoch research station at 3470 m above sea level, complemented by a study of copper that was activated simultaneously. We directly observed the production of 7Be, 101Rh, 125Sb, 126I and 127Xe in xenon, of which only 125Sb could potentially lead to background for a multi-ton-scale dark matter search. The production rates for five of the eight studied radioactive isotopes in copper agree with the only existing dedicated activation measurement, while we observe lower rates for the remaining ones. The specific saturation activities of both samples are also compared to predictions obtained with commonly used software packages, where we observe some underpredictions, especially for xenon activation.
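The buildup and decay behind such an activation study follow the standard saturation formula. A small Python illustration, with purely illustrative numbers rather than the paper's measured rates:

```python
import math

def activity(rate_per_kg_day, half_life_days, t_expose, t_cool):
    """Cosmogenic activity after surface exposure and underground cool-down,
    using the standard activation model:

    A = R * (1 - exp(-lambda * t_expose)) * exp(-lambda * t_cool)

    where R is the production rate at saturation (atoms/kg/day) and
    lambda = ln(2) / half-life. Returned in decays per kg per day.
    """
    lam = math.log(2) / half_life_days
    return rate_per_kg_day * (1 - math.exp(-lam * t_expose)) * math.exp(-lam * t_cool)

# 125Sb (half-life about 2.76 y) after 345 days of exposure and 90 days
# of cool-down; the unit production rate here is illustrative only.
print(activity(rate_per_kg_day=1.0, half_life_days=2.76 * 365.25,
               t_expose=345, t_cool=90))
```

The long half-life of 125Sb is what makes it relevant: it neither saturates during exposure nor decays away during a realistic cool-down.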

Relevance: 60.00%

Abstract:

Web-based education, or 'e-learning', has become a critical component of higher education over the last decade, replacing other distance-learning methods such as traditional computer-based training or correspondence learning. The number of university students who take online courses is continuously increasing all over the world. In Spain, nearly 90% of universities have an institutional e-learning platform, and over 60% of traditional on-site courses use this technology as a supplement to face-to-face classes. Among other advantages, this form of learning removes geographical barriers and enables students to schedule their own learning process. Online education is delivered through specific software called an 'e-learning platform' or 'virtual learning environment' (VLE). A considerable number of web-based tools for delivering distance courses are currently available. Open source software packages such as Moodle, Sakai, dotLRN and Dokeos are the most commonly used in the virtual campuses of Spanish universities. This paper analyzes the possibilities that virtual learning environments offer university teachers and learners and presents a technical comparison of some of the most popular e-learning platforms.

Relevance: 60.00%

Abstract:

The time of concentration of a watershed remains relatively unknown to engineers. The usual procedure in a hydrological study is to calculate it with several formulae chosen from among the existing ones and then to use the mean of the values obtained. From this mean the remaining hydrological results are derived, results that will determine how future infrastructure is designed. This research began with the aim of finding a more reliable and objective method to estimate the time of concentration. Given the impossibility of carrying out hydrological tests in a real watershed, since neither rainfall nor outflows can be monitored with sufficient accuracy, a simulation-based approach was adopted, using two-dimensional hydraulic models of direct rainfall over a 2D finite-volume mesh. Among the available software packages, InfoWorks ICM was chosen for its speed and ease of use. In a first phase, the selected hydraulic model was validated by checking the outcomes of several simulations against the existing analytical formulations. Next, the resulting times of concentration were compared with those given by the expressions referenced in the literature, with highly satisfactory results. Once the model was verified, 690 simulations of both natural and synthetic basins were performed, varying area, slope, roughness, and rainfall intensity and duration, in order to obtain their times of concentration and lag times. This work was completed in a reasonable time thanks to the vector-computing acceleration offered by CUDA (Compute Unified Device Architecture) technology. Based on dimensional analysis, the time-of-concentration results were grouped into dimensionless monomials, and a new formulation for the time of concentration was obtained by multiple linear regression. The new expression was checked against the classical formulations, yielding equivalent results. This new methodology is intended to assist the engineer in hydrological studies: first, because it provides, in a simple and objective way, data that can feed global models such as HEC-HMS; and second, because it has proven to be a genuinely valid alternative to the usual hydrological methodology.
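The "several formulae" step that the thesis replaces can be illustrated with two classical expressions. A Python sketch using the Témez and Kirpich formulas in their commonly published metric forms; the coefficients below are as usually quoted in hydrology references, and the thesis's new regression formula is not reproduced in the abstract:

```python
def tc_temez(length_km, slope):
    """Témez formula (widely used in Spain): tc in hours,
    main-channel length in km, slope as a fraction (m/m)."""
    return 0.3 * (length_km / slope ** 0.25) ** 0.76

def tc_kirpich(length_m, slope):
    """Kirpich formula (common metric form): tc in minutes,
    channel length in m, slope as a fraction (m/m)."""
    return 0.0195 * length_m ** 0.77 * slope ** -0.385

# Hypothetical basin: 5 km channel, 2% slope
L_km, S = 5.0, 0.02
estimates_h = [tc_temez(L_km, S), tc_kirpich(L_km * 1000, S) / 60.0]
print(sum(estimates_h) / len(estimates_h))  # mean of the estimates, in hours
```

The spread between the two estimates for the same basin is precisely the imprecision that motivates the simulation-based method.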

Relevance: 60.00%

Abstract:

This research falls within the field of languages for specific purposes, here the discourse of Information and Communication Technology (ICT). Of the aspects of these languages with potential linguistic interest, the analysis of the terminological component has taken priority. Traditionally, the conceptualization of a field of knowledge was represented mainly through nominal elements, as held by the General Theory of Terminology (Wüster, 1968). Both lexicology and lexicography have made important contributions to terminological studies for identifying the lexical component through which specialized information is conveyed. Although those early terminological studies pointed to the noun as the denominative-conceptual element, more recent theories, notably the Communicative Theory of Terminology (Cabré, 1999), identify other morphosyntactic structures, built from non-nominal elements, that equally carry this conceptual load. On this basis, we selected the relational adjective for this study: it represents a grammatical category other than the noun while remaining linked to the noun through its derivation, which gives it potential terminological interest. The research set out to test the following hypotheses: 1. The relational adjective contributes specialized content in its association with the nominal component. 2. The relational adjective carries a semantic value that makes it possible to identify more precisely the conceptual relation between the elements of the resulting lexical combination, adjective and noun, especially in some ambiguous formations. 3. The relational adjective, as the natural modifier of the noun it accompanies, may impose certain restrictions on its combinations and thus make a discriminating selection of the members of the specialized lexical combination. Accordingly, this research has delimited and characterized the lexical segment under study: the specialized lexical combination (SLC), formally represented by the syntactic pattern [Radj+n], where Radj is the relational adjective and n the noun it accompanies. The analysis is framed by the theory of the Generative Lexicon (Pustejovsky, 1995) and the semantic representation it proposes to explain the generation of meaning. This theory provides several levels of semantic description connected through generative operations or devices that account for the compositional interpretation of an utterance in context. We analyzed these levels of lexical representation, in particular the qualia structure, through which we identified the semantic relation holding between the two lexical items of the pattern [Radj+n]; the semantic study of the two items also confirmed the denominative value of the adjective in the combination. A corpus of ICT texts written in English and Spanish, some of them translations of one another, was compiled and processed with several electronic tools, including electronic lexicons, general and specialized online dictionaries, and online reference corpora in English and Spanish for contrastive validation of the data. Search engines, notably WordNet Search 3.1, were used to obtain the semantic information of the lexical units. Our conclusions corroborate the hypotheses of the thesis, especially the one concerning the denominative-conceptual value of the relational adjective, which, together with the noun it accompanies, forms part of the cognitive representation of the ICT specialized language. As a continuation of this study, lines of future research are proposed, together with the design of computer applications that could incorporate these semantic data as a complement to lexical items endowed with denominative-conceptual value.
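WordNet encodes exactly the adjective-to-noun link this thesis exploits: relational adjectives are stored as "pertainyms" of their base nouns. A small NLTK sketch, offered as a stand-in for the WordNet Search 3.1 lookups mentioned above:

```python
# Requires: pip install nltk, then nltk.download("wordnet")
from nltk.corpus import wordnet as wn

def noun_bases(adjective):
    """Return the nouns a relational adjective 'pertains to' according
    to WordNet, which models the Radj -> n derivational link as a
    pertainym relation between lemmas."""
    bases = set()
    for lemma in wn.lemmas(adjective, pos=wn.ADJ):
        for pert in lemma.pertainyms():
            bases.add(pert.name())
    return bases

print(noun_bases("digital"))  # e.g. {'digit'}
```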

Relevance: 60.00%

Abstract:

Three sets of laboratory column experiments on the hydrogeochemistry of seawater intrusion have been modelled using two codes: ACUAINTRUSION (Chemical Engineering Department, University of Alicante) and PHREEQC (U.S.G.S.). These reactive models use the hydrodynamic parameters determined with the ACUAINTRUSION TRANSPORT software and fit the chloride breakthrough curves perfectly. The ACUAINTRUSION code was improved, and its instabilities were studied relative to the discretisation. Relative square errors were obtained for different combinations of the spatial and temporal steps: a global error for the full experimental data set and a partial error for each element. Good simulations of the three experiments were obtained with ACUAINTRUSION using slight variations of the selectivity coefficients determined for both sediments in batch experiments with fresh water. The cation exchange parameters included in ACUAINTRUSION follow the Gapon convention with modified exponents for the Ca/Mg exchange. PHREEQC simulations performed using the Gaines-Thomas convention were unsatisfactory with the exchange coefficients from the PHREEQC database (or its range), while those determined with fresh water and natural sediment allowed only an approximation to be obtained. For the treated sediment, adjusted exchange coefficients were determined to improve the simulation; they differ greatly from the PHREEQC database values and from the batch-experiment values, but are of a similar order to the others determined under dynamic conditions. The two software packages simulated different cation concentrations, a disparity that can be attributed to the defined selectivity coefficients, which affect the gypsum equilibrium. Consequently, each code calculates different sulphate concentrations, with ACUAINTRUSION predicting the smaller mismatch. In general, the ACUAINTRUSION and PHREEQC simulations produced similar results and predictions consistent with the experimental data. The simulated results are not identical to the experimental data, however; sulphate (total S) is overpredicted by both models, most likely owing to factors such as gypsum kinetics, possible variations of the exchange coefficients with salinity, and the neglect of other processes.
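The global and partial error measures can be written compactly. A Python sketch assuming the usual normalised sum-of-squares form (the paper's exact definition is not given in the abstract), with made-up concentrations:

```python
def relative_square_error(observed, simulated):
    """Relative square error, assumed here as
    sum((sim - obs)^2) / sum(obs^2); the paper's exact normalisation
    may differ, but this is the usual form for comparing discretisations."""
    num = sum((s - o) ** 2 for o, s in zip(observed, simulated))
    den = sum(o ** 2 for o in observed)
    return num / den

# Partial error per element, and a global error over all elements pooled
obs = {"Cl": [10.0, 9.0, 7.0], "Na": [8.0, 6.0, 5.0]}
sim = {"Cl": [10.2, 8.7, 7.1], "Na": [7.5, 6.3, 4.8]}
partial = {el: relative_square_error(obs[el], sim[el]) for el in obs}
pooled_obs = sum(obs.values(), [])
pooled_sim = sum(sim.values(), [])
print(partial, relative_square_error(pooled_obs, pooled_sim))
```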

Relevance: 60.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 60.00%

Abstract:

In recent years, acoustic perturbation measurement has gained clinical and research popularity due to the ready availability of commercial acoustic analysis software packages. However, because the measurement depends critically on accurate frequency tracking of the voice signal, researchers argue that perturbation measures are not suitable for analysing dysphonic voice samples, which are aperiodic in nature. This study compares the fundamental frequency, relative amplitude perturbation, shimmer percent and noise-to-harmonic ratio between a group of dysphonic and a group of non-dysphonic subjects. One hundred and twelve dysphonic subjects (93 females and 19 males) and 41 non-dysphonic subjects (35 females and 6 males) participated in the study. All 153 voice samples were categorized into type I (periodic or nearly periodic), type II (signals with subharmonic frequencies that approach the fundamental frequency) and type III (aperiodic) signals. Only the type I (periodic and nearly periodic) voice signals were acoustically analysed for perturbation measures. Results revealed that the dysphonic female group presented a significantly lower fundamental frequency and significantly higher relative amplitude perturbation and shimmer percent values than the non-dysphonic female group. However, none of these three perturbation measures was able to differentiate between male dysphonic and male non-dysphonic subjects. The noise-to-harmonic ratio failed to differentiate between dysphonic and non-dysphonic voices for both gender groups. These results question the sensitivity of acoustic perturbation measures in detecting dysphonia and suggest that contemporary acoustic perturbation measures are not suitable for analysing dysphonic voice signals, even those that are nearly periodic.
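Both perturbation measures are short computations once cycle boundaries are known, which is precisely why they degrade when frequency tracking fails. A Python sketch of the textbook definitions; commercial packages may differ in detail:

```python
def shimmer_percent(amps):
    """Shimmer (%): mean absolute difference between consecutive cycle
    peak amplitudes, relative to the mean amplitude."""
    diffs = [abs(a2 - a1) for a1, a2 in zip(amps, amps[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

def rap(periods):
    """Relative average perturbation: each cycle length is compared with
    the 3-point moving average centred on it, normalised by the mean period."""
    devs = [abs(p - (a + p + b) / 3)
            for a, p, b in zip(periods, periods[1:], periods[2:])]
    return (sum(devs) / len(devs)) / (sum(periods) / len(periods))

# Illustrative peak amplitudes and cycle lengths (ms) from four cycles
print(shimmer_percent([1.0, 1.1, 0.95, 1.05]), rap([5.0, 5.1, 4.9, 5.05]))
```

If the tracker mislabels even a few cycle boundaries in an aperiodic signal, both statistics inflate, which is the core of the argument above.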

Relevance: 60.00%

Abstract:

Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse at approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. [Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
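The delta and epsilon statistics are defined in the paper itself and are not reproduced here; as a stand-in, the sketch below computes the familiar average standard deviation of split (bipartition) frequencies across replicated runs, the same style of between-run convergence check popularised by MrBayes:

```python
from statistics import pstdev

def asdsf(split_freqs_by_run):
    """Average standard deviation of split frequencies across replicate
    MCMC runs. Values near zero suggest the runs sample the same
    posterior over tree bipartitions; this is not the paper's own
    delta/epsilon statistics, just an analogous diagnostic.

    split_freqs_by_run: list of {split: posterior frequency}, one per run.
    """
    splits = set().union(*split_freqs_by_run)
    sds = [pstdev([run.get(s, 0.0) for run in split_freqs_by_run])
           for s in splits]
    return sum(sds) / len(sds)

run1 = {"AB|CD": 0.91, "AC|BD": 0.05}
run2 = {"AB|CD": 0.88, "AC|BD": 0.09}
print(asdsf([run1, run2]))
```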

Relevance: 60.00%

Abstract:

Species extinctions and the deterioration of other biodiversity features worldwide have led to the adoption of systematic conservation planning in many regions of the world. As a consequence, various software tools for conservation planning have been developed over the past twenty years. These tools implement algorithms designed to identify conservation area networks for the representation and persistence of biodiversity features. Budgetary, ethical, and other sociopolitical constraints dictate that the prioritized sites represent biodiversity with minimum impact on human interests; planning tools are typically also used to satisfy these criteria. This chapter reviews both the concepts and the technical choices that underlie the development of these tools. Conservation planning problems can be formulated as optimization problems, and we evaluate the suitability of different algorithms for their solution. Finally, we review some key issues associated with the use of these tools, such as computational efficiency, the effectiveness of taxa and abiotic parameters as surrogates for biodiversity, and the process of setting explicit representation targets for biodiversity surrogates.
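In its simplest "minimum set" form, the optimization problem mentioned above reduces to set cover: choose the fewest sites whose combined features meet all representation targets. A greedy Python sketch, offered as an illustrative baseline rather than the algorithm of any particular planning tool:

```python
def greedy_reserve(sites, targets):
    """Greedy minimum-set-cover heuristic for reserve selection:
    repeatedly pick the site covering the most still-unrepresented
    features until every target feature is represented.

    sites: {site: set(features present)}; targets: features to represent."""
    chosen, missing = [], set(targets)
    while missing:
        best = max(sites, key=lambda s: len(sites[s] & missing))
        if not sites[best] & missing:
            break  # remaining features occur in no site
        chosen.append(best)
        missing -= sites[best]
    return chosen

sites = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4, 5}}
print(greedy_reserve(sites, targets={1, 2, 3, 4, 5}))  # ['C', 'A']
```

Real tools extend this with site costs, persistence criteria, and spatial design constraints, which is where the exact and metaheuristic algorithms reviewed in the chapter come in.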

Relevance: 60.00%

Abstract:

Most widely used computer software packages, such as word processors, spreadsheets and web browsers, incorporate comprehensive help systems, partly because the software is meant for those with little technical knowledge. This paper identifies four systematic philosophies, or approaches, to help-system delivery: the documentation approach, based on written documents, either paper-based or online; the training approach, offered either before the user starts working with the software or on the job; intelligent help, that is, online context-sensitive help or help relying on software agents; and finally an approach based on minimalism, defined as providing help only when and where it is needed.

Relevance: 60.00%

Abstract:

The widespread implementation of Manufacturing Resource Planning (MRPII) systems in the U.K. and abroad, and the reported dissatisfaction with their use, formed the initial basis of this research, which concentrates on the fundamental theory and design of the closed-loop MRPII system itself. The dissertation concentrates on two key aspects: how master production scheduling is carried out in differing business environments, and how well the 'closing of the loop' operates through checks of the capacity requirements of the different levels of plans within an organisation. The main hypothesis tested is that in U.K. manufacturing industry, resource checks are either not being carried out satisfactorily or are not being fed back to the appropriate plan in a timely fashion. The research methodology involved initial detailed investigations into master scheduling and capacity planning in eight diverse manufacturing companies, followed by a nationwide survey of users in 349 companies, a survey of all the major suppliers of production management software in the U.K., and an analysis of the facilities offered by current software packages. The main conclusion is that the hypothesis is proved for the majority of companies: only just over 50% of companies attempt resource and capacity planning, and only 20% successfully feed back CRP information to 'close the loop'. Various causative factors are put forward and remedies are suggested.
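The resource check whose absence the thesis documents is, at its core, a comparison of planned load against available capacity per work centre and period, with overloads fed back to the master schedule. A minimal Python illustration with hypothetical figures:

```python
def capacity_check(load, capacity):
    """Rough-cut capacity check of a master schedule: return the
    (work_centre, period) cells where planned load exceeds capacity,
    with the overload in hours. Reporting these back to the master
    production schedule is the feedback that 'closes the loop'."""
    return {(wc, t): round(h - capacity[wc], 1)
            for (wc, t), h in load.items() if h > capacity[wc]}

# Hypothetical planned load (hours) per work centre and period,
# against weekly capacity per work centre
load = {("lathe", 1): 46.0, ("lathe", 2): 38.5, ("mill", 1): 52.0}
capacity = {"lathe": 40.0, "mill": 50.0}
print(capacity_check(load, capacity))  # {('lathe', 1): 6.0, ('mill', 1): 2.0}
```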