967 results for Multilevel Systems Model
Abstract:
Whilst traditional optimisation techniques based on mathematical programming are in common use, they are unable to explore the complexity of decision problems addressed using agricultural system models. In these models, the full decision space is usually very large while the solution space is characterized by many local optima. Methods to search such large decision spaces rely on effective sampling of the problem domain. Nevertheless, problem reduction based on insight into agronomic relations and farming practice is necessary to safeguard computational feasibility. Here, we present a global search approach based on an Evolutionary Algorithm (EA). We introduce a multi-objective evaluation technique within this EA framework, linking the optimisation procedure to the APSIM cropping systems model. The approach addresses the issue of system management when faced with a trade-off between economic and ecological consequences.
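A minimal sketch of the Pareto-based selection step inside such a multi-objective EA. Two toy analytic functions stand in for the APSIM-derived economic and ecological evaluations; the objectives, bounds, and mutation scheme are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

# Toy stand-in objectives: the paper couples the EA to the APSIM cropping
# systems model; these analytic functions merely mimic an economic vs.
# ecological trade-off over one decision variable (e.g. an input rate).
def economic_loss(x):
    return -(x * (2.0 - x))        # negated profit: peaks at intermediate input

def ecological_loss(x):
    return x ** 2                  # e.g. leaching risk grows with input level

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both minimised)."""
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

def evolve(pop_size=40, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 2.0, pop_size)          # initial decision values
    for _ in range(generations):
        objs = [(economic_loss(x), ecological_loss(x)) for x in pop]
        # elitist Pareto selection: keep every non-dominated individual
        front = np.array([x for x, fx in zip(pop, objs)
                          if not any(dominates(fy, fx) for fy in objs)])
        # refill the population by mutating random members of the front
        k = pop_size - len(front)
        children = rng.choice(front, k) + rng.normal(0.0, 0.05, k)
        pop = np.clip(np.concatenate([front, children]), 0.0, 2.0)
    return pop
```

After a few dozen generations the surviving individuals approximate the economic-ecological trade-off front rather than a single optimum, which is the point of the multi-objective evaluation.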
Abstract:
The survival of organisations, especially SMEs, depends to a great extent on those who supply them with the required material inputs. If a supplier fails to deliver the right materials at the right time and place, and at the right price, the recipient organisation is bound to fail in its obligations to satisfy the needs of its customers and to stay in business. Hence, the task of choosing a supplier(s) from a list of vendors, one that an organisation will trust with its very existence, is not an easy one. This project investigated how purchasing personnel in organisations solve the problem of vendor selection. The investigation went further to ascertain whether an Expert Systems model could be developed and used as a plausible solution to the problem. An extensive literature review indicated that very little research has been conducted in the area of Expert Systems for Vendor Selection, whereas many research theories in expert systems and in purchasing and supply chain management, respectively, had been reported. A survey questionnaire was designed and circulated to people in industry who actually perform the vendor selection tasks. Analysis of the collected data confirmed the various factors which are considered during the selection process, and established the order in which those factors are ranked. Five of the factors, namely Production Methods Used, Vendor's Financial Background, Manufacturing Capacity, Size of Vendor Organisation, and Supplier's Position in the Industry, appeared to have similar patterns in the way organisations ranked them. These patterns suggested that the bigger the organisation, the more importance it attached to the above factors. Further investigation revealed that respondents agreed that the most important factors were Product Quality, Product Price, and Delivery Date. The most apparent pattern was observed for the Vendor's Financial Background.
This generated curiosity which led to the design and development of a prototype expert system for assessing the financial profile of potential suppliers. The prototype, called ESfNS, determines whether a prospective supplier has a good financial background or not. ESfNS was tested by the potential users, who then confirmed that expert systems have great prospects and commercial viability in this domain for solving vendor selection problems.
Abstract:
We analyzed projections of current and future ambient temperatures along the eastern United States in relation to the thermal tolerance of harbor seals in air. Using the earth systems model (HadGEM2-ES) and representative concentration pathways (RCPs) 4.5 and 8.5, which are indicative of two different atmospheric CO2 concentrations, we were able to examine possible shifts in distribution based on three metrics: current preferences, the thermal limit of juveniles, and the thermal limits of adults. Our analysis focused on average ambient temperatures because harbor seals are least effective at regulating their body temperature in air, making them most susceptible to rising air temperatures in the coming years. Our study focused on the months of May, June, and August from 2041-2060 (2050) and 2061-2080 (2070) as these are the historic months in which harbor seals are known to annually come ashore to pup, breed, and molt. May, June, and August are also some of the warmest months of the year. We found that breeding colonies along the eastern United States will be limited by the thermal tolerance of juvenile harbor seals in air, while their foraging range will extend as far south as the thermal tolerance of adult harbor seals in air. Our analysis revealed that in 2070, harbor seal pups should be absent from the United States coastline nearing the end of the summer due to exceptionally high air temperatures.
Abstract:
Safeguarding organizations against opportunism and severe deception in computer-mediated communication (CMC) presents a major challenge to CIOs and IT managers. New insights into linguistic cues of deception derive from the speech acts innate to CMC. Applying automated text analysis to archival email exchanges in a CMC system as part of a reward program, we assess the ability of word use (micro-level), message development (macro-level), and intertextual exchange cues (meta-level) to detect severe deception by business partners. We empirically assess the predictive ability of our framework using an ordinal multilevel regression model. Results indicate that deceivers minimize the use of referencing and self-deprecation but include more superfluous descriptions and flattery. Deceitful channel partners also over-structure their arguments and rapidly mimic the linguistic style of the account manager across dyadic e-mail exchanges. Thanks to its diagnostic value, the proposed framework can support firms’ decision-making and guide compliance monitoring system development.
Abstract:
Adaptability and invisibility are hallmarks of modern terrorism, and keeping pace with its dynamic nature presents a serious challenge for societies throughout the world. Innovations in computer science have incorporated applied mathematics to develop a wide array of predictive models to support the variety of approaches to counterterrorism. Predictive models are usually designed to forecast the location of attacks. Although this may protect individual structures or locations, it does not reduce the threat—it merely changes the target. While predictive models dedicated to events or social relationships receive much attention where the mathematical and social science communities intersect, models dedicated to terrorist locations such as safe-houses (rather than their targets or training sites) are rare and possibly nonexistent. At the time of this research, there were no publicly available models designed to predict locations where violent extremists are likely to reside. This research uses France as a case study to present a complex systems model that incorporates multiple quantitative, qualitative and geospatial variables that differ in terms of scale, weight, and type. Though many of these variables are recognized by specialists in security studies, there remains controversy with respect to their relative importance, degree of interaction, and interdependence. Additionally, some of the variables proposed in this research are not generally recognized as drivers, yet they warrant examination based on their potential role within a complex system. This research tested multiple regression models and determined that geographically-weighted regression analysis produced the most accurate result to accommodate non-stationary coefficient behavior, demonstrating that geographic variables are critical to understanding and predicting the phenomenon of terrorism.
This dissertation presents a flexible prototypical model that can be refined and applied to other regions to inform stakeholders such as policy-makers and law enforcement in their efforts to improve national security and enhance quality-of-life.
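The geographically-weighted regression behind that result amounts to one weighted least-squares fit per location, with kernel weights that decay with distance so coefficients can vary over space. The one-dimensional synthetic data, Gaussian kernel, and fixed bandwidth below are illustrative assumptions; the dissertation's variables and weighting choices are far richer:

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Geographically weighted regression sketch: fit a separate weighted
    least-squares regression at each observation point, with Gaussian
    kernel weights, so coefficients may vary over space."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])               # intercept + predictor
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)  # distances to point i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)         # Gaussian kernel
        XtW = Xd.T * w                                  # == Xd.T @ diag(w)
        betas[i] = np.linalg.solve(XtW @ Xd, XtW @ y)
    return betas

# Synthetic non-stationary data: the true slope grows from west to east,
# and the local fits should recover that spatial trend.
rng = np.random.default_rng(0)
coords = np.column_stack([np.linspace(0.0, 1.0, 60), np.zeros(60)])
X = rng.normal(size=60)
y = (1.0 + 2.0 * coords[:, 0]) * X                      # spatially varying slope
betas = gwr_coefficients(coords, X, y, bandwidth=0.15)
```

A global ordinary least-squares fit would average the slope away; the local coefficients expose exactly the non-stationary behavior the dissertation reports.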
Abstract:
Introduction: Cancer is preventable in some cases if exposure to carcinogenic substances in the environment is avoided. In Colombia, Cundinamarca is one of the departments with the greatest increases in the mortality rate, and in the municipality of Sibaté residents have expressed concern about the increase of the disease. In the field of global environmental health, georeferencing applied to the study of health phenomena has been successful, with valid results. The study proposed using geographic information tools to generate time and space analyses that would make the behavior of cancer in Sibaté visible and support hypotheses of environmental influences on concentrations of cases. Objective: To obtain the incidence and prevalence of cancer cases among inhabitants of Sibaté and to georeference the cases over a 5-year period, based on a review of records. Methodology: Exploratory, descriptive, cross-sectional study of all cancer diagnoses between 2010 and 2014 found in the archives of the municipal Health Secretariat. Only persons with permanent residence in the municipality who were diagnosed with cancer between 2010 and 2014 were included. For each case, gender, age, socioeconomic stratum, educational level, occupation, and marital status were obtained. The date of diagnosis was used for the time analysis; the residential address, type of cancer, and geographic coordinate were used for the spatial analysis. Geographic coordinates were generated with a Garmin GPS device, and maps were created with the location points of the patients' homes. The information was processed with Epi Info 7. Results: 107 cancer cases registered with the Sibaté Health Secretariat were found: 66 women and 41 men. Without dividing by gender, 30.93% of the population presented cancer of the reproductive system, 18.56% of the digestive system, and 17.53% of the integumentary system.
Two large spatial clusters of cases were found in the studied territory: one in the Pablo Neruda neighborhood with 12 cases (21.05%) and one in the urban core of Sibaté with 38 cases (66.67%). Conclusion: It was corroborated that geographic analysis with spatio-temporal and exposure variables can be a tool for generating hypotheses about associations between cancer cases and environmental factors.
Abstract:
Context. We study galaxy evolution and spatial patterns in the surroundings of a sample of 2dF groups. Aims. Our aim is to find evidence of galaxy evolution and clustering out to 10 times the virial radius of the groups and so redefine their properties according to the spatial patterns in the fields and relate them to galaxy evolution. Methods. Group members and interlopers were redefined after the identification of gaps in the redshift distribution. We then used exploratory spatial statistics based on the second moment of the Ripley function to probe the anisotropy in the galaxy distribution around the groups. Results. We found an important anticorrelation between anisotropy around groups and the fraction of early-type galaxies in these fields. Our results illustrate how the dynamical state of galaxy groups can be ascertained by the systematic study of their neighborhoods. This is an important achievement, since the correct estimate of the extent to which galaxies are affected by the group environment and follow large-scale filamentary structure is relevant to understanding the process of galaxy clustering and evolution in the Universe.
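The Ripley function underlying such spatial statistics can be illustrated with a naive K-function estimator for a point pattern in the unit square. This is the plain isotropic form with no edge correction; the paper works with the second moment of this function to probe anisotropy, which this sketch does not attempt:

```python
import numpy as np

def ripley_k(points, r, area=1.0):
    """Naive Ripley K estimator: mean number of further points within
    distance r of a typical point, divided by the intensity. Under
    complete spatial randomness K(r) ~ pi * r**2 (edge effects ignored)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = np.count_nonzero(d < r) - n        # drop the zero self-distances
    intensity = n / area
    return pairs / (n * intensity)

# Two synthetic patterns of equal size: near-random vs. strongly clustered.
rng = np.random.default_rng(42)
uniform = rng.uniform(size=(400, 2))                   # CSR-like pattern
parents = rng.uniform(size=(20, 2))
clustered = (parents[rng.integers(0, 20, 400)]
             + rng.normal(0.0, 0.02, size=(400, 2)))   # tight offspring clusters
```

Clustered patterns yield K(r) well above the pi*r**2 CSR benchmark at small r, which is the excess the group-environment statistics build on.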
Abstract:
One of the standard generalized-gradient approximations (GGAs) in use in modern electronic-structure theory [Perdew-Burke-Ernzerhof (PBE) GGA] and a recently proposed modification designed specifically for solids (PBEsol) are identified as particular members of a family of functionals taking their parameters from different properties of homogeneous or inhomogeneous electron liquids. Three further members of this family are constructed and tested, together with the original PBE and PBEsol, for atoms, molecules, and solids. We find that PBE, in spite of its popularity in solid-state physics and quantum chemistry, is not always the best performing member of the family and that PBEsol, in spite of having been constructed specifically for solids, is not the best for solids. The performance of GGAs for finite systems is found to sensitively depend on the choice of constraints stemming from infinite systems. Guidelines both for users and for developers of density functionals emerge from this work.
Abstract:
Finite-size scaling analysis turns out to be a powerful tool to calculate the phase diagram as well as the critical properties of two-dimensional classical statistical mechanics models and quantum Hamiltonians in one dimension. The most used method to locate quantum critical points is the so-called crossing method, where the estimates are obtained by comparing the mass gaps of two distinct lattice sizes. The success of this method is due to its simplicity and its ability to provide accurate results even for relatively small lattice sizes. In this paper, we introduce an estimator that locates quantum critical points by exploiting the known distinct behavior of the entanglement entropy in critical and noncritical systems. As a benchmark test, we use this new estimator to locate the critical point of the quantum Ising chain and the critical line of the spin-1 Blume-Capel quantum chain. The tricritical point of this last model is also obtained. Comparison with the standard crossing method is also presented. The method we propose is simple to implement in practice, particularly in density matrix renormalization group calculations, and provides, like the crossing method, remarkably accurate results for quite small lattice sizes. Our applications show that the proposed method has several advantages compared with the standard crossing method, and we believe it will become popular in future numerical studies.
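The standard crossing method that the paper benchmarks against can be sketched for the transverse-field Ising chain via exact diagonalization: the scaled gap L*(E1-E0) for two chain lengths crosses near the critical coupling g = 1. Small periodic chains and a coarse coupling grid are assumptions made for brevity, and this sketch does not implement the paper's entanglement-entropy estimator:

```python
import numpy as np

def ising_hamiltonian(L, g):
    """Transverse-field Ising chain, H = -sum sx_i sx_{i+1} - g sum sz_i,
    with periodic boundaries; critical at g = 1."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    eye = np.eye(2)
    def op(site_ops):                          # tensor product over all sites
        m = np.array([[1.0]])
        for k in range(L):
            m = np.kron(m, site_ops.get(k, eye))
        return m
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L):
        H -= op({i: sx, (i + 1) % L: sx})      # nearest-neighbour coupling
        H -= g * op({i: sz})                   # transverse field
    return H

def scaled_gap(L, g):
    E = np.linalg.eigvalsh(ising_hamiltonian(L, g))
    return L * (E[1] - E[0])

def crossing_estimate(L1=6, L2=8, gs=np.linspace(0.8, 1.2, 41)):
    """Scan g and report where the scaled-gap curves of two sizes cross:
    below g_c the larger chain has the smaller scaled gap, above g_c the
    larger one, so the sign of the difference flips at the crossing."""
    diff = [scaled_gap(L2, g) - scaled_gap(L1, g) for g in gs]
    for g0, g1, d0, d1 in zip(gs, gs[1:], diff, diff[1:]):
        if d0 <= 0.0 < d1:
            return 0.5 * (g0 + g1)
    return None
```

Even with L = 6 and 8 the crossing lands close to the exact g_c = 1, which is the "accurate results for quite small lattice sizes" property the abstract highlights.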
Abstract:
Stream discharge-concentration relationships are indicators of terrestrial ecosystem function. Throughout the Amazon and Cerrado regions of Brazil rapid changes in land use and land cover may be altering these hydrochemical relationships. The current analysis focuses on factors controlling the discharge-calcium (Ca) concentration relationship since previous research in these regions has demonstrated both positive and negative slopes in linear log10(discharge)-log10(Ca) concentration regressions. The objective of the current study was to evaluate factors controlling stream discharge-Ca concentration relationships including year, season, stream order, vegetation cover, land use, and soil classification. It was hypothesized that land use and soil class are the most critical attributes controlling discharge-Ca concentration relationships. A multilevel, linear regression approach was utilized with data from 28 streams throughout Brazil. These streams come from three distinct regions and varied broadly in watershed size (< 1 to > 10^6 ha) and discharge (10^-5.7 to 10^3.2 m^3 s^-1). Linear regressions of log10(Ca) versus log10(discharge) in 13 streams have a preponderance of negative slopes with only two streams having significant positive slopes. An ANOVA decomposition suggests the effect of discharge on Ca concentration is large but variable. Vegetation cover, which incorporates aspects of land use, explains the largest proportion of the variance in the effect of discharge on Ca, followed by season and year. In contrast, stream order, land use, and soil class explain most of the variation in stream Ca concentration. In the current data set, soil class, which is related to lithology, has an important effect on Ca concentration but land use, likely through its effect on runoff concentration and hydrology, has a greater effect on discharge-concentration relationships.
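At the level of a single stream, the slope in question is an ordinary least-squares fit in log-log space. The noise-free synthetic series below is an illustrative assumption; the paper's multilevel model pools 28 such streams with group-level predictors, which this sketch omits:

```python
import numpy as np

def loglog_slope(discharge, ca):
    """Slope of log10(Ca) on log10(discharge): a negative slope indicates
    dilution (concentration falls as flow rises), a positive one flushing."""
    slope, _intercept = np.polyfit(np.log10(discharge), np.log10(ca), 1)
    return slope

# Synthetic dilution-dominated stream: Ca = 5 * Q**-0.3
q = np.logspace(-2, 2, 50)          # discharge spanning four decades
ca = 5.0 * q ** -0.3
```

On this power-law series the fitted slope recovers the exponent -0.3 exactly; real streams scatter around such a line, which is what the multilevel regression partitions among year, season, and watershed attributes.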
Abstract:
A procedure to evaluate mine rehabilitation practices during the operational phase was developed and validated. It is based on a comparison of actually observed or documented practices with internationally recommended best practices (BP). A set of 150 BP statements was derived from international guides in order to establish the benchmark. The statements are arranged in six rehabilitation programs under three categories: (1) planning, (2) operational, and (3) management, corresponding to the adoption of the plan-do-check-act management systems model to mine rehabilitation. The procedure consists of (i) performing technical inspections guided by a series of field forms containing BP statements; (ii) classifying evidences in five categories; and (iii) calculating conformity indexes and levels. For testing and calibration purposes, the procedure was applied to nine limestone quarries and conformity indexes were calculated for the rehabilitation programs in each quarry. Most quarries featured poor planning practices, operational practices reached high conformity levels in 50% of the cases and management practices scored moderate conformity. Despite all quarries being ISO 14001 certified, their management systems pay low attention to issues pertaining to land rehabilitation and biodiversity. The best results were achieved by a quarry whose expansion was recently submitted to the environmental impact assessment process, suggesting that public scrutiny may play a positive role in enhancing rehabilitation practices. Conformity indexes and levels can be used to chart the evolution of rehabilitation practices at regular intervals, to establish corporate goals and for communication with stakeholders.
Abstract:
The procedure for online process control by attributes consists of inspecting a single item for every m items produced. On the basis of the inspection result it is decided whether the process is in control (the conforming fraction is stable) or out of control (the conforming fraction has decreased, for example). Most articles about online process control have cited the stoppage of the production process for an adjustment when the inspected item is non-conforming (the production is then restarted in control; this is here denominated a corrective adjustment). Moreover, the articles related to this subject do not present semi-economical designs (which may yield high quantities of non-conforming items), as they do not include a policy of preventive adjustments (in which case no item is inspected), which can be more economical, mainly if the inspected item can be misclassified. In this article, the choice between a preventive and a corrective adjustment of the process is decided for every m items produced. If a preventive adjustment is decided upon, then no item is inspected. Otherwise, the m-th item is inspected; if it conforms, production goes on; if not, an adjustment takes place and the process restarts in control. This approach is economically feasible for some practical situations, and the parameters of the proposed procedure are determined by minimizing an average cost function subject to some statistical restrictions (for example, to assure a minimal level, fixed in advance, of conforming items in the production process). Numerical examples illustrate the proposal.
Abstract:
The discrete-time neural network proposed by Hopfield can be used for storing and recognizing binary patterns. Here, we investigate how the performance of this network on a pattern-recognition task is altered when neurons are removed and the weights of the synapses corresponding to these deleted neurons are divided among the remaining synapses. Five distinct ways of distributing such weights are evaluated. We speculate how this numerical work on synaptic compensation may help to guide experimental studies on memory rehabilitation interventions.
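The deletion-and-redistribution experiment can be sketched with a standard Hebbian Hopfield network. Of the five redistribution schemes the paper evaluates, the uniform rule below (each surviving synapse of a neuron absorbs an equal share of that neuron's lost weight) is simply our illustrative choice:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian storage of +-1 patterns; no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronous threshold updates."""
    for _ in range(steps):
        state = np.where(W @ state >= 0.0, 1, -1)
    return state

def remove_and_redistribute(W, removed):
    """Delete neurons and share each surviving neuron's lost synaptic
    weight uniformly among its remaining synapses (one possible scheme)."""
    keep = [i for i in range(W.shape[0]) if i not in removed]
    Wk = W[np.ix_(keep, keep)].copy()
    lost = W[np.ix_(keep, removed)].sum(axis=1)    # lost weight per neuron
    Wk += (lost / (len(keep) - 1))[:, None]        # uniform share per synapse
    np.fill_diagonal(Wk, 0.0)
    return Wk

# Store two random patterns, delete six neurons, then cue with a
# corrupted version of the first (truncated) pattern.
rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(2, 64))
W = train_hopfield(patterns)
removed = [0, 1, 2, 3, 4, 5]
Wk = remove_and_redistribute(W, removed)
target = patterns[0][len(removed):]
probe = target.copy()
probe[:5] *= -1                                    # corrupt five bits of the cue
overlap = float((recall(Wk, probe) * target).mean())
```

With a light memory load the truncated pattern survives both the deletion and the redistributed weights, so the overlap with the stored pattern stays close to 1; degrading it by raising the load or the deletion fraction reproduces the kind of comparison the paper makes across its five schemes.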
Abstract:
Recent years have seen the introduction of new and varied designs of activated sludge plants. With increasing needs for higher efficiencies and lower costs, the possibility of a plant that operates more effectively has created the need for tools that can be used to evaluate and compare designs at the design stage. One such tool is the operating space diagram. It is the aim of this paper to present this tool and demonstrate its application and relevance to design using a simple case study. In the case study, use of the operating space diagram suggested changes in design that would improve the flexibility of the process. It was also useful for designing suitable control strategies.
Abstract:
Multi-environment trials (METs) used to evaluate breeding lines vary in the number of years that they sample. We used a cropping systems model to simulate the target population of environments (TPE) for 6 locations over 108 years for 54 'near-isolines' of sorghum in north-eastern Australia. For a single reference genotype, each of 547 trials was clustered into 1 of 3 'drought environment types' (DETs) based on a seasonal water stress index. Within sequential METs of 2 years duration, the frequencies of these drought patterns often differed substantially from those derived for the entire TPE. This was reflected in variation in the mean yield of the reference genotype. For the TPE and for 2-year METs, restricted maximum likelihood methods were used to estimate components of genotypic and genotype by environment variance. These also varied substantially, although not in direct correlation with frequency of occurrence of different DETs over a 2-year period. Combined analysis over different numbers of seasons demonstrated the expected improvement in the correlation between MET estimates of genotype performance and the overall genotype averages as the number of seasons in the MET was increased.