855 results for DDM Data Distribution Management testbed benchmark design implementation instance generator


Relevance:

100.00%

Publisher:

Abstract:

Data Distribution Management (DDM) is a core part of the High Level Architecture (HLA) standard; its goal is to optimize the resources that simulation environments use to exchange data. DDM has to filter and match the information generated during a simulation so that each federate (i.e., each simulation entity) receives only the information it needs. It is important that this matching is done quickly and accurately, both to obtain good performance and to avoid transmitting irrelevant data, otherwise network resources may saturate quickly. The main topic of this thesis is the implementation of a super partes (neutral) DDM testbed. It evaluates the quality of DDM approaches of any kind: it supports both region-based and grid-based approaches, and it can also accommodate different methods that are still unknown. It ranks them using three factors: execution time, memory usage, and distance from the optimal solution. A prearranged set of instances is already available, and we also allow the creation of new instances from user-provided parameters. The thesis is structured as follows. We start by introducing DDM and HLA and describing in detail what they do. Then, in the first chapter, we describe the state of the art, providing an overview of the best-known resolution approaches and pseudocode for the most interesting ones. The third chapter describes how the testbed we implemented is structured. In the fourth chapter we present and compare the results obtained by running the four approaches we implemented. The result of the work described in this thesis can be downloaded from SourceForge at the following link: https://sourceforge.net/projects/ddmtestbed/. It is licensed under the GNU General Public License version 3.0 (GPLv3).
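To make the matching problem concrete, the sketch below illustrates the kind of task a DDM approach has to solve and that such a testbed benchmarks: given update and subscription extents, find every overlapping pair. It is an illustrative brute-force baseline written for this summary, not code from the testbed itself; all names are invented.

```python
# Minimal sketch of the matching problem a DDM testbed evaluates: given update
# and subscription extents (axis-aligned ranges), find every overlapping pair.
# Names and data layout are illustrative, not the testbed's actual API.
from itertools import product

def overlaps(a, b):
    """True if two d-dimensional extents overlap on every axis.
    Each extent is a list of (lower, upper) bounds, one per dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

def brute_force_match(updates, subscriptions):
    """Baseline (brute-force) approach: test every update/subscription pair."""
    return [(u, s)
            for u, s in product(range(len(updates)), range(len(subscriptions)))
            if overlaps(updates[u], subscriptions[s])]

# Example: two update extents and two subscription extents in 2-D.
updates = [[(0.0, 2.0), (0.0, 2.0)], [(5.0, 6.0), (5.0, 6.0)]]
subscriptions = [[(1.0, 3.0), (1.0, 3.0)], [(4.0, 7.0), (0.0, 1.0)]]
print(brute_force_match(updates, subscriptions))  # [(0, 0)]
```

A testbed can then time such an implementation, record its memory use, and compare the returned set of pairs against the exact (optimal) matching.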

Relevance:

100.00%

Publisher:

Abstract:

Data Distribution Management (DDM) is a component of the High Level Architecture standard. Its task is to detect overlaps between update and subscription extents efficiently. This thesis discusses why a framework is needed and why it was implemented. Fair, like-for-like testing of algorithms, libraries that simplify the implementation of new algorithms, and automation of the compilation phase were fundamental motivations for starting work on the framework. The main motivation was that, while surveying the scientific literature on DDM and its various algorithms, we noticed that every article generated its own ad hoc data for testing. A further goal of this framework is therefore to make it possible to compare algorithms on a consistent data set. We decided to test the framework in the Cloud in order to obtain a more reliable comparison between runs performed by different users. Two of the most widely used services were considered: Amazon AWS EC2 and Google App Engine. The advantages and disadvantages of each are presented, together with the reasons for choosing Google App Engine. Four algorithms were developed: Brute Force, Binary Partition, Improved Sort, and Interval Tree Matching. Tests were carried out on execution time and peak memory usage. The results show that Interval Tree Matching and Improved Sort are the most efficient. All tests were performed on the sequential versions of the algorithms, so the execution time of the Interval Tree Matching algorithm could still be reduced.
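As a rough illustration of how sort-based matching improves over brute force, the sketch below sweeps over sorted extent endpoints in one dimension. It is a generic endpoint sweep assumed here for exposition only; it is not a reproduction of the thesis's Improved Sort or Interval Tree Matching implementations.

```python
# Rough sketch of a sort-based sweep for one-dimensional extent matching, the
# general idea behind sort-based DDM algorithms. Illustrative only; not the
# thesis's Improved Sort or Interval Tree Matching code.

def sweep_match(updates, subscriptions):
    """Find overlapping (update, subscription) index pairs in one dimension.
    Extents are (lower, upper) tuples; endpoints are processed in sorted order."""
    events = []  # (coordinate, is_end, kind, index)
    for i, (lo, hi) in enumerate(updates):
        events.append((lo, 0, 'U', i)); events.append((hi, 1, 'U', i))
    for j, (lo, hi) in enumerate(subscriptions):
        events.append((lo, 0, 'S', j)); events.append((hi, 1, 'S', j))
    events.sort()  # starts sort before ends at equal coordinates, so touching extents match

    open_updates, open_subs, matches = set(), set(), []
    for _, is_end, kind, idx in events:
        if is_end:
            (open_updates if kind == 'U' else open_subs).discard(idx)
        elif kind == 'U':
            matches.extend((idx, j) for j in open_subs)
            open_updates.add(idx)
        else:
            matches.extend((i, idx) for i in open_updates)
            open_subs.add(idx)
    return matches

print(sweep_match([(0, 2), (5, 6)], [(1, 3), (4, 7)]))  # [(0, 0), (1, 1)]
```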

Relevance:

100.00%

Publisher:

Abstract:

As network capacity has increased over the past decade, individuals and organisations have found it increasingly appealing to make use of remote services in the form of service-oriented architectures and cloud computing services. Data processed by remote services, however, is no longer under the direct control of the individual or organisation that provided the data, leaving data owners at risk of data theft or misuse. This paper describes a model by which data owners can control the distribution and use of their data throughout a dynamic coalition of service providers using digital rights management technology. Our model allows a data owner to establish the trustworthiness of every member of a coalition employed to process data, and to communicate a machine-enforceable usage policy to every such member.
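To give a flavour of what "communicating a machine-enforceable usage policy" can mean in practice, here is a minimal sketch of a policy object and an enforcement check. The field names and rules are assumptions made for illustration; the paper defines its own policy language and DRM mechanisms, which are not reproduced here.

```python
# Illustrative sketch of a machine-enforceable usage policy as a data structure
# plus a simple enforcement check. Field names and rules are assumptions for
# illustration; the paper's actual policy language and DRM mechanisms differ.
from dataclasses import dataclass

@dataclass
class UsagePolicy:
    owner: str
    permitted_actions: set          # e.g. {"read", "aggregate"}
    trusted_members: set            # coalition members the owner has vetted
    redistribution_allowed: bool = False

def is_permitted(policy: UsagePolicy, member: str, action: str) -> bool:
    """A service provider's enforcement point: allow an action only if the
    requesting coalition member is trusted and the action is permitted."""
    if member not in policy.trusted_members:
        return False
    if action == "redistribute":
        return policy.redistribution_allowed
    return action in policy.permitted_actions

policy = UsagePolicy(owner="data-owner", permitted_actions={"read"},
                     trusted_members={"provider-a"})
print(is_permitted(policy, "provider-a", "read"))          # True
print(is_permitted(policy, "provider-b", "read"))          # False
print(is_permitted(policy, "provider-a", "redistribute"))  # False
```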

Relevance:

100.00%

Publisher:

Abstract:

The materials information requirements of the aerospace sector are considered, specifically 'consolidation' (management of raw test data), 'analysis' (investigation of material trade-offs) and 'dissemination' (secure distribution of data throughout an organization). An information architecture that satisfies the complex requirements of the aerospace materials industry is discussed and a case study is presented. © 2003 by Granta Design Limited. Published by the American Institute of Aeronautics and Astronautics, Inc.

Relevance:

100.00%

Publisher:

Abstract:

Today, for the first time in history, most of the population can expect to live into their sixties and beyond (United Nations, 2015). However, there is still little evidence that older people are living in better health than their parents did at the same age, since most health problems in old age are associated with chronic diseases (WHO, 2015). The health systems of developed countries work adequately when it comes to caring for acute diseases, but they are not sufficiently effective in managing chronic diseases. Over the last decade, efforts have been made to improve this management through prevention strategies and by refocusing the provision of health care services (Kane et al., 2005). According to a systematic review of health care models commissioned by the British National Health Service, few models have conceptualized which components should be used to provide effective chronic care, and these components have not been sufficiently structured and articulated. There is therefore insufficient evidence about the real impact of any of the models that currently exist (Ham, 2006). Innovations could help achieve better diagnosis, treatment and management of chronic patients, as well as supporting professionals and patients in care. However, the way these innovations are delivered is not efficient, effective or user-friendly enough. Improving this requires multidisciplinary teams and strategies. In conclusion, activities are needed that enable innovations to be adopted by health systems seeking to improve the management of chronic care, so that it becomes possible to: 1) translate "evidence-based healthcare" into "actionable knowledge"; 2) address the complexity of healthcare through multidisciplinary research; 3) identify a systematic approach for establishing innovative healthcare interventions. The reference framework developed in this research work is an attempt to provide these improvements. The following hypotheses have been proposed. Hypothesis 1: it is possible to define a translation process that converts a model of chronic care into a structured description of goals, requirements and key performance indicators. Hypothesis 2: the translation process, if executed through evidence-based, multidisciplinary and business-oriented elements, can convert a model of chronic care into a descriptive framework that defines the life cycle of innovative solutions for chronic care. Hypothesis 3: it is possible to define a method to evaluate processes, outcomes and the capacity to develop skills, and to assist multidisciplinary teams in creating innovative solutions for chronic care. Hypothesis 4: it is possible to support the development of innovative solutions for chronic care through a reference framework and obtain positive effects, measured through key performance indicators. To verify the hypotheses, a methodological approach composed of four Phases was defined, each one associated with a hypothesis.
Prior to this, a "Phase 0" was carried out, in which the background of the problem (i.e. the systematic adoption of innovation in chronic care) was analysed from a multi-domain and multi-disciplinary perspective. During Phase 1, a Knowledge Translation Process was developed, built on the Joanna Briggs Institute (JBI) model of evidence-based healthcare (Pearson, 2005), on top of which four Innovation Blocks were defined. These blocks consist of a description of innovative elements, defined in Phase 0, that were added to the four elements making up the JBI model. The work carried out in this phase also served to define the materials on which the translation process has to operate. The translation performed in Phase 2 turns the best available evidence on chronic care into action: the result of this translation process is the descriptive part of the reference framework, which consists of a description of a model of chronic care (the Chronic Care Model was chosen; Wagner, 1996) in terms of goals, specifications and key performance indicators, organized into three innovation cycles (design, implementation and evaluation). This result made it possible to verify the second hypothesis. During Phase 3, to demonstrate the third hypothesis, a mixed method for evaluating multidisciplinary teams working on innovations for chronic care was developed, based on the mixed method used to evaluate multidisciplinary translational teams (Wooden, 2013). This method adds a procedural dimension to the framework. The result of this phase is therefore a first version of the reference framework, ready to be tested. In Phase 4, the framework was validated through a multilevel case study, with participant observation as the data collection method. The case study chosen was the set of research activities that the LifeStech research group has carried out since 2008 to improve the management of diabetes, activities conducted in an international context. The results show that the framework improved the work activities on several levels: 1) the quality and quantity of publications; 2) two research contracts on diabetes were obtained, the first an applied research project and the second a funded project to accelerate innovations into the market; 3) through the key performance indicators proposed in the framework, a proof of concept of a prototype developed in a research project was transformed into an early-stage assessment of an eHealth intervention for diabetes management, which has recently been included in the repository of innovative practices of the European Innovation Partnership on Active and Healthy Ageing. The verification of the four hypotheses made it possible to demonstrate the main hypothesis of this research work: it is possible to help bridge the gap between healthcare and innovation and, in turn, improve the way chronic care is delivered by healthcare systems.

ABSTRACT

Nowadays, for the first time in history, most people can expect to live into their sixties and beyond (United Nations, 2015).
However, little evidence suggests that older people are experiencing better health than their parents, and most of the health problems of older age are linked to Chronic Diseases (CDs) (WHO, 2015). The established health care systems in developed countries are well suited to the treatment of acute diseases but are mostly inadequate for dealing with CDs. Healthcare systems are tackling the burden of chronic diseases by putting more emphasis on the prevention of disease and by looking for new ways to reorient the provision of care (Kane et al., 2005). According to an evidence-based review commissioned by the British NHS Institute, few models have conceptualized effective components of care for CDs, and these components have not been sufficiently structured and articulated. "Consequently, there is limited evidence about the real impact of any of the existing models" (Ham, 2006). Innovations could help achieve better diagnosis, treatment and management for patients across the continuum of care by supporting health professionals and empowering patients to take responsibility. However, the way they are delivered is not sufficiently efficient, effective or consumer-friendly. Improving innovation delivery involves the creation of multidisciplinary research teams and task forces, rather than just working teams. Several actions can improve the adoption of innovations by healthcare systems that are tackling the epidemic of CDs: 1) translate Evidence-Based Healthcare (EBH) into actionable knowledge; 2) face the complexity of healthcare through multidisciplinary research; 3) identify a systematic approach to support the effective implementation of healthcare interventions through innovation. The framework proposed in this research work is an attempt to provide these improvements. The following hypotheses have been drafted. Hypothesis 1: it is possible to define a translation process to convert a model of chronic care into a structured description of goals, requirements and key performance indicators. Hypothesis 2: a translation process, if executed through evidence-based, multidisciplinary, holistic and business-oriented elements, can convert a model of chronic care into a descriptive framework which defines the whole development cycle of innovative solutions for chronic disease management. Hypothesis 3: it is possible to design a method to evaluate processes, outcomes and skill acquisition capacities, and to assist multidisciplinary research teams in the creation of innovative solutions for chronic disease management. Hypothesis 4: it is possible to assist the development of innovative solutions for chronic disease management through a reference framework and produce positive effects, measured through key performance indicators. In order to verify the hypotheses, a methodological approach composed of four Phases, each corresponding to one of the stated hypotheses, was defined. Prior to this, a "Phase 0", consisting of a multi-domain and multi-disciplinary background analysis of the problem (i.e. the systematic adoption of innovation in chronic care), was carried out. During Phase 1, in order to verify the first hypothesis, a Knowledge Translation Process (KTP) was developed, starting from the Joanna Briggs Institute (JBI) model of evidence-based healthcare (Pearson, 2005) and adding four Innovation Blocks.
These blocks represent an enriched description, added to the JBI model, intended to accelerate the transformation of evidence-based healthcare through innovation; the Innovation Blocks are built on top of the conclusions drawn after Phase 0. The background analysis also gave indications on the materials and methods to be used for the execution of the KTP, which was carried out during Phase 2 and translates the current best available evidence for chronic care into action. This resulted in a descriptive Framework: a description of a model of chronic care (the Chronic Care Model was chosen; Wagner, 1996) in terms of goals, specified requirements and Key Performance Indicators, articulated in the three development cycles of innovation (i.e. design, implementation and evaluation). This result verified the second hypothesis. During Phase 3, in order to verify the third hypothesis, a mixed method to evaluate multidisciplinary teams working on innovations for chronic care was created, based on the mixed method used for the evaluation of Multidisciplinary Translational Teams (Wooden, 2013). This method adds a procedural dimension to the descriptive component of the Framework. The result of this phase was a draft version of the framework, ready to be tested in a real scenario. During Phase 4, a single, multilevel case study with participant-observation data collection was carried out in order to obtain a complete yet multi-sectorial evaluation of the framework. The activities that the LifeStech research group has carried out since 2008 to improve the management of diabetes were selected as the case study. The results showed that the framework improved the research activities in several directions: the quality and quantity of the research publications that LifeStech has issued increased substantially; two project grants to improve the management of diabetes were awarded, the first funding applied research and the second aimed at accelerating innovations into the market; and, by using the assessment KPIs of the framework, the proof-of-concept validation of a prototype developed in a research project was transformed into an early-stage assessment of an innovative eHealth intervention for diabetes management, which has recently been included in the repository of innovative practices of the European Innovation Partnership on Active and Healthy Ageing initiative. The verification of the four hypotheses led to the verification of the main hypothesis of this research work: it is possible to help bridge the gap between healthcare and innovation and, in turn, improve the way chronic care is delivered by healthcare systems.

Relevance:

100.00%

Publisher:

Abstract:

The design and implementation of data bases involve, firstly, the formulation of a conceptual data model by systematic analysis of the structure and information requirements of the organisation for which the system is being designed; secondly, the logical mapping of this conceptual model onto the data structure of the target data base management system (DBMS); and thirdly, the physical mapping of this structured model into the storage structures of the target DBMS. The accuracy of both the logical and the physical mapping determines the performance of the resulting systems. This thesis describes research which develops software tools to facilitate the implementation of data bases. A conceptual model describing the information structure of a hospital is derived using the Entity-Relationship (E-R) approach, and this model forms the basis for the mapping onto the logical model. Rules are derived for automatically mapping the conceptual model onto relational and CODASYL types of data structures. Further algorithms are developed for partly automating the implementation of these models on the INGRES, MIMER and VAX-11 DBMSs.
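As an illustration of the kind of rule involved in logically mapping a conceptual model onto a relational structure, the sketch below applies the classic "entity type becomes a table, identifier becomes the primary key" rule. It is a toy example written for this summary; the thesis's actual mapping rules and tools for relational and CODASYL targets are more extensive, and the entity shown is hypothetical.

```python
# Minimal sketch of one classic E-R-to-relational mapping rule: each entity type
# becomes a table, its attributes become columns, and its identifying attribute
# becomes the primary key. The rule set and naming here are illustrative and are
# not the mapping rules or tooling developed in the thesis.

def entity_to_table(entity: str, attributes: dict, key: str) -> str:
    """Render a CREATE TABLE statement for an entity type.
    `attributes` maps attribute names to SQL types; `key` names the identifier."""
    cols = [f"    {name} {sqltype}" for name, sqltype in attributes.items()]
    cols.append(f"    PRIMARY KEY ({key})")
    return f"CREATE TABLE {entity} (\n" + ",\n".join(cols) + "\n);"

# Example: a hypothetical PATIENT entity from a hospital conceptual model.
print(entity_to_table("patient",
                      {"patient_id": "INTEGER", "name": "VARCHAR(80)",
                       "ward": "VARCHAR(20)"},
                      key="patient_id"))
```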

Relevance:

100.00%

Publisher:

Abstract:

This paper reports on the design, implementation and outcomes of a mentoring program involving 18 employees in the IT Division of WorkCover Queensland. The paper provides some background information on the development of the program and the design and implementation phases, including recruitment and matching of participants, orientation and training, and the mentoring process including transition and/or termination. The paper also outlines the quantitative and qualitative evaluation processes that occurred and the outcomes of that evaluation. Results indicated a wealth of positive individual, mentoring, and organisational outcomes. The organisation and semi-structured processes provided in the program are considered major contributing factors to the successful outcomes of the program. These outcomes are likely to have long-term benefits for the individuals involved, the IT Division, and the broader organisation.

Relevance:

100.00%

Publisher:

Abstract:

Dashboards are expected to improve decision making by amplifying cognition and capitalizing on human perceptual capabilities. Hence, interest in dashboards has increased recently, which is also evident from the proliferation of dashboard solution providers in the market. Despite dashboards' popularity, little is known about the extent of their effectiveness, i.e. what types of dashboards work best for different users or tasks. In this paper, we conduct a comprehensive multidisciplinary literature review with the aim of identifying the critical issues organizations might need to consider when implementing dashboards. Dashboards are likely to succeed and to solve the problems of presentation format and information load when certain visualization principles and features are present (e.g. a high data-ink ratio and drill-down features). We recommend that dashboards come with some level of flexibility, i.e. allowing users to switch between alternative presentation formats. Also, some theory-driven guidance through pop-ups and warnings can help users select an appropriate presentation format. Given the dearth of research on dashboards, we conclude the paper with a research agenda that could guide future studies in this area.

Relevance:

100.00%

Publisher:

Abstract:

Multiple reaction monitoring (MRM) mass spectrometry coupled with stable isotope dilution (SID) and liquid chromatography (LC) is increasingly used in biological and clinical studies for precise and reproducible quantification of peptides and proteins in complex sample matrices. Robust LC-SID-MRM-MS-based assays that can be replicated across laboratories and ultimately in clinical laboratory settings require standardized protocols to demonstrate that the analysis platforms are performing adequately. We developed a system suitability protocol (SSP), which employs a predigested mixture of six proteins, to facilitate performance evaluation of LC-SID-MRM-MS instrument platforms configured with nanoflow-LC systems interfaced to triple quadrupole mass spectrometers. The SSP was designed for use with low multiplex analyses as well as high multiplex approaches when software-driven scheduling of data acquisition is required. Performance was assessed by monitoring a range of chromatographic and mass spectrometric metrics including peak width, chromatographic resolution, peak capacity, and the variability in peak area and analyte retention time (RT) stability. The SSP, which was evaluated in 11 laboratories on a total of 15 different instruments, enabled early diagnoses of LC and MS anomalies that indicated suboptimal LC-MRM-MS performance. The observed range in variation of each of the metrics scrutinized serves to define the criteria for optimized LC-SID-MRM-MS platforms for routine use, with pass/fail criteria for system suitability performance measures defined as peak area coefficient of variation <0.15, peak width coefficient of variation <0.15, standard deviation of RT <0.15 min (9 s), and RT drift <0.5 min (30 s). The deleterious effect of a marginally performing LC-SID-MRM-MS system on the limit of quantification (LOQ) in targeted quantitative assays illustrates the use of and need for an SSP to establish robust and reliable system performance. Use of an SSP helps to ensure that analyte quantification measurements can be replicated with good precision within and across multiple laboratories and should facilitate more widespread use of MRM-MS technology by the basic biomedical and clinical laboratory research communities.
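The pass/fail thresholds quoted above translate directly into simple checks on replicate measurements. The sketch below shows that arithmetic; the function layout and the example values are illustrative assumptions, not part of the published SSP.

```python
# Small sketch of the pass/fail logic implied by the stated system suitability
# criteria (peak area CV < 0.15, peak width CV < 0.15, RT standard deviation
# < 0.15 min, RT drift < 0.5 min). The function layout and example data are
# illustrative; the SSP itself defines how replicates are acquired and summarized.
from statistics import mean, stdev

def coefficient_of_variation(values):
    return stdev(values) / mean(values)

def system_suitable(peak_areas, peak_widths, retention_times):
    rt_drift = max(retention_times) - min(retention_times)
    checks = {
        "peak area CV < 0.15":  coefficient_of_variation(peak_areas) < 0.15,
        "peak width CV < 0.15": coefficient_of_variation(peak_widths) < 0.15,
        "RT stdev < 0.15 min":  stdev(retention_times) < 0.15,
        "RT drift < 0.5 min":   rt_drift < 0.5,
    }
    return all(checks.values()), checks

# Example with hypothetical replicate measurements for one monitored peptide.
ok, detail = system_suitable(peak_areas=[1.02e6, 0.98e6, 1.05e6],
                             peak_widths=[0.21, 0.22, 0.20],
                             retention_times=[18.20, 18.25, 18.22])
print(ok, detail)
```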

Relevance:

100.00%

Publisher:

Abstract:

Distributed systems are widely used for solving large-scale and data-intensive computing problems, including all-to-all comparison (ATAC) problems. However, when used for ATAC problems, existing computational frameworks such as Hadoop focus on load balancing for allocating comparison tasks, without careful consideration of data distribution and storage usage. While Hadoop-based solutions provide users with simplicity of implementation, their inherent MapReduce computing pattern does not match the ATAC pattern. This leads to load imbalances and poor data locality when Hadoop's data distribution strategy is used for ATAC problems. Here we present a data distribution strategy which considers data locality, load balancing and storage savings for ATAC computing problems in homogeneous distributed systems. A simulated annealing algorithm is developed for data distribution and task scheduling. Experimental results show a significant performance improvement for our approach over Hadoop-based solutions.
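Since the strategy relies on simulated annealing for data distribution and task scheduling, a generic annealing skeleton for assigning data items to nodes is sketched below. The cost function here only penalizes load imbalance and is purely illustrative; the paper's actual objective also captures data locality and storage savings and is not reproduced.

```python
# Generic simulated annealing skeleton for assigning data items to nodes, of the
# kind applied to data distribution for all-to-all comparison problems. The toy
# cost below only penalizes load imbalance; the paper's objective is richer.
import math, random

def anneal(num_items, num_nodes, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    assign = [rng.randrange(num_nodes) for _ in range(num_items)]

    def cost(a):
        loads = [a.count(n) for n in range(num_nodes)]
        return max(loads) - min(loads)       # toy objective: load imbalance only

    current, temp = cost(assign), t0
    for _ in range(steps):
        i = rng.randrange(num_items)
        old, assign[i] = assign[i], rng.randrange(num_nodes)
        candidate = cost(assign)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if candidate <= current or rng.random() < math.exp((current - candidate) / max(temp, 1e-9)):
            current = candidate
        else:
            assign[i] = old                  # reject: undo the move
        temp *= cooling
    return assign, current

assignment, imbalance = anneal(num_items=50, num_nodes=4)
print(imbalance)   # typically close to the best achievable imbalance (1 for 50 items on 4 nodes)
```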

Relevance:

100.00%

Publisher:

Abstract:

This technical memorandum documents the design, implementation, data preparation, and descriptive results for the 2006 Annual Economic Survey of Federal Gulf Shrimp Permit Holders. The data collection was designed by the NOAA Fisheries Southeast Fisheries Science Center Social Science Research Group to track the financial and economic status and performance by vessels holding a federal moratorium permit for harvesting shrimp in the Gulf of Mexico. A two page, self-administered mail survey collected total annual costs broken out into seven categories and auxiliary economic data. In May 2007, 580 vessels were randomly selected, stratified by state, from a preliminary population of 1,709 vessels with federal permits to shrimp in offshore waters of the Gulf of Mexico. The survey was implemented during the rest of 2007. After many reminder and verification phone calls, 509 surveys were deemed complete, for an ineligibility-adjusted response rate of 90.7%. The linking of each individual vessel’s cost data to its revenue data from a different data collection was imperfect, and hence the final number of observations used in the analyses is 484. Based on various measures and tests of validity throughout the technical memorandum, the quality of the data is high. The results are presented in a standardized table format, linking vessel characteristics and operations to simple balance sheet, cash flow, and income statements. In the text, results are discussed for the total fleet, the Gulf shrimp fleet, the active Gulf shrimp fleet, and the inactive Gulf shrimp fleet. Additional results for shrimp vessels grouped by state, by vessel characteristics, by landings volume, and by ownership structure are available in the appendices. The general conclusion of this report is that the financial and economic situation is bleak for the average vessels in most of the categories that were evaluated. With few exceptions, cash flow for the average vessel is positive while the net revenue from operations and the “profit” are negative. With negative net revenue from operations, the economic return for average shrimp vessels is less than zero. Only with the help of government payments does the average owner just about break even. In the short-term, this will discourage any new investments in the industry. The financial situation in 2006, especially if it endures over multiple years, also is economically unsustainable for the average established business. Vessels in the active and inactive Gulf shrimp fleet are, on average, 69 feet long, weigh 105 gross tons, are powered by 505 hp motor(s), and are 23 years old. Three-quarters of the vessels have steel hulls and 59% use a freezer for refrigeration. The average market value of these vessels was $175,149 in 2006, about a hundred-thousand dollars less than the average original purchase price. The outstanding loans averaged $91,955, leading to an average owner equity of $83,194. Based on the sample, 85% of the federally permitted Gulf shrimp fleet was actively shrimping in 2006. Of these 386 active Gulf shrimp vessels, just under half (46%) were owner-operated. On average, these vessels burned 52,931 gallons of fuel, landed 101,268 pounds of shrimp, and received $2.47 per pound of shrimp. Non-shrimp landings added less than 1% to cash flow, indicating that the federal Gulf shrimp fishery is very specialized. The average total cash outflow was $243,415 of which $108,775 was due to fuel expenses alone. 
The expenses for hired crew and captains were on average $54,866 which indicates the importance of the industry as a source of wage income. The resulting average net cash flow is $16,225 but has a large standard deviation. For the population of active Gulf shrimp vessels we can state with 95% certainty that the average net cash flow was between $9,500 and $23,000 in 2006. The median net cash flow was $11,843. Based on the income statement for active Gulf shrimp vessels, the average fixed costs accounted for just under a quarter of operating expenses (23.1%), labor costs for just over a quarter (25.3%), and the non-labor variable costs for just over half (51.6%). The fuel costs alone accounted for 42.9% of total operating expenses in 2006. It should be noted that the labor cost category in the income statement includes both the actual cash payments to hired labor and an estimate of the opportunity cost of owner-operators’ time spent as captain. The average labor contribution (as captain) of an owner-operator is estimated at about $19,800. The average net revenue from operations is negative $7,429, and is statistically different and less than zero in spite of a large standard deviation. The economic return to Gulf shrimping is negative 4%. Including non-operating activities, foremost an average government payment of $13,662, leads to an average loss before taxes of $907 for the vessel owners. The confidence interval of this value straddles zero, so we cannot reject, with 95% certainty, that the population average is zero. The average inactive Gulf shrimp vessel is generally of a smaller scale than the average active vessel. Inactive vessels are physically smaller, are valued much lower, and are less dependent on loans. Fixed costs account for nearly three quarters of the total operating expenses of $11,926, and only 6% of these vessels have hull insurance. With an average net cash flow of negative $7,537, the inactive Gulf shrimp fleet has a major liquidity problem. On average, net revenue from operations is negative $11,396, which amounts to a negative 15% economic return, and owners lose $9,381 on their vessels before taxes. To sustain such losses and especially to survive the negative cash flow, many of the owners must be subsidizing their shrimp vessels with the help of other income or wealth sources or are drawing down their equity. Active Gulf shrimp vessels in all states but Texas exhibited negative returns. The Alabama and Mississippi fleets have the highest assets (vessel values), on average, yet they generate zero cash flow and negative $32,224 net revenue from operations. Due to their high (loan) leverage ratio the negative 11% economic return is amplified into a negative 21% return on equity. In contrast, for Texas vessels, which actually have the highest leverage ratio among the states, a 1% economic return is amplified into a 13% return on equity. From a financial perspective, the average Florida and Louisiana vessels conform roughly to the overall average of the active Gulf shrimp fleet. It should be noted that these results are averages and hence hide the variation that clearly exists within all fleets and all categories. Although the financial situation for the average vessel is bleak, some vessels are profitable. (PDF contains 101 pages)
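The reported interval for average net cash flow follows from the usual normal-approximation confidence interval for a mean. The sketch below shows that arithmetic; the standard deviation used is a hypothetical value chosen only so the numbers land near the reported range, since the report's own estimate is not quoted in this summary.

```python
# Sketch of the arithmetic behind a 95% confidence interval for a mean, as used
# for the statement that average net cash flow lay between roughly $9,500 and
# $23,000. The sample standard deviation below is a hypothetical value chosen
# only to illustrate the formula; the report's own estimate is not given here.
import math

def ci95(sample_mean, sample_sd, n):
    half_width = 1.96 * sample_sd / math.sqrt(n)   # normal approximation
    return sample_mean - half_width, sample_mean + half_width

# Reported: mean net cash flow $16,225 over 386 active Gulf shrimp vessels.
low, high = ci95(sample_mean=16_225, sample_sd=67_700, n=386)
print(f"${low:,.0f} to ${high:,.0f}")   # roughly $9,500 to $23,000
```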

Relevance:

100.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land-use patterns. An essential methodology for studying and quantifying such interactions is the adoption of land-use models. By applying land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land-use change has a long tradition. On the regional scale in particular, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable efficient development, testing and use of integrated land-use models. On the system side, SITE provides generic data structures (grids, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be used and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. Sub-models can be integrated via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was placed on expandability, maintainability and usability. Along with the modeling framework, a land-use model for analyzing the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, the socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics over the historical period from 1981 to 2002. Analogously, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The calibration period ranged from 1981 to 2002, and the corresponding reference land-use maps were compiled for this period. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge of the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenue per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
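To illustrate the kind of objective function mentioned above, the sketch below computes a simplified figure-of-merit map comparison between simulated and reference land-use change (hits over hits plus misses plus false alarms). It is an illustration of the measure under simplifying assumptions, not SITE's implementation, and it ignores refinements such as treating change simulated to the wrong category as a separate error class.

```python
# Sketch of a "figure of merit" style map comparison of the kind usable as a
# calibration objective: agreement between simulated and observed land-use
# change, computed as hits / (hits + misses + false alarms). Illustrative only;
# not SITE's actual implementation.

def figure_of_merit(initial, reference, simulated):
    """All three maps are equally sized sequences of cell categories.
    hits: observed change simulated correctly; misses: observed change not
    simulated correctly; false alarms: change simulated where none was observed."""
    hits = misses = false_alarms = 0
    for init, ref, sim in zip(initial, reference, simulated):
        observed_change = ref != init
        simulated_change = sim != init
        if observed_change and simulated_change and ref == sim:
            hits += 1
        elif observed_change:
            misses += 1
        elif simulated_change:
            false_alarms += 1
    denom = hits + misses + false_alarms
    return hits / denom if denom else 1.0

initial   = ["forest", "forest", "forest", "agri"]
reference = ["agri",   "forest", "agri",   "agri"]
simulated = ["agri",   "agri",   "forest", "agri"]
print(figure_of_merit(initial, reference, simulated))  # 1 hit, 1 miss, 1 false alarm -> 0.333...
```

A genetic algorithm can then maximize this score by varying the selected model parameters across generations.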

Relevance:

100.00%

Publisher:

Abstract:

This paper describes the design, implementation and testing of an intelligent knowledge-based supervisory control (IKBSC) system for a hot rolling mill process. A novel architecture is used to integrate an expert system with an existing supervisory control system and a new optimization methodology for scheduling the soaking pits in which the material is heated prior to rolling. The resulting IKBSC system was applied to an aluminium hot rolling mill process to improve the shape quality of low-gauge plate and to optimise the use of the soaking pits to reduce energy consumption. The results from the trials demonstrate the advantages to be gained from the IKBSC system that integrates knowledge contained within data, plant and human resources with existing model-based systems. (c) 2005 Elsevier Ltd. All rights reserved.
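As a toy illustration of the scheduling decision the soaking pit optimization addresses, here is a greedy "assign each ingot to the earliest-available pit" heuristic. The names, data and heuristic are assumptions made for exposition only; the paper's optimization methodology and its energy-consumption objectives are not reproduced here.

```python
# Toy illustration of the kind of decision involved in soaking pit scheduling:
# assigning incoming ingots to pits so that heating finishes as early as possible.
# This greedy earliest-available-pit heuristic is a stand-in for exposition only;
# the paper's optimization methodology is not reproduced here.
import heapq

def schedule_ingots(soak_times, num_pits):
    """Assign each ingot (given its required soak time in hours) to the pit that
    frees up first. Returns (assignments, makespan)."""
    pits = [(0.0, pit) for pit in range(num_pits)]   # (time pit becomes free, pit id)
    heapq.heapify(pits)
    assignments = []
    for ingot, soak in enumerate(soak_times):
        free_at, pit = heapq.heappop(pits)
        finish = free_at + soak
        assignments.append((ingot, pit, finish))
        heapq.heappush(pits, (finish, pit))
    return assignments, max(f for _, _, f in assignments)

plan, makespan = schedule_ingots(soak_times=[4.0, 3.0, 5.0, 2.0, 6.0], num_pits=2)
print(plan, makespan)
```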