945 results for Management capacity indicator
Abstract:
The anticipated growth of air traffic worldwide requires enhanced Air Traffic Management (ATM) technologies and procedures to increase the system capacity, efficiency, and resilience, while reducing environmental impact and maintaining operational safety. To deal with these challenges, new automation and information exchange capabilities are being developed through different modernisation initiatives toward a new global operational concept called Trajectory Based Operations (TBO), in which aircraft trajectory information becomes the cornerstone of advanced ATM applications. This transformation will lead to higher levels of system complexity requiring enhanced Decision Support Tools (DST) to aid humans in decision-making processes. These will rely on accurately predicted aircraft trajectories, provided by advanced Trajectory Predictors (TP). The trajectory prediction process is subject to stochastic effects that introduce uncertainty into the predictions. Regardless of the assumptions that define the aircraft motion model underpinning the TP, deviations between predicted and actual trajectories are unavoidable. This thesis proposes an innovative method to characterise the uncertainty associated with a trajectory prediction based on the mathematical theory of Polynomial Chaos Expansions (PCE). Assuming univariate PCEs of the trajectory prediction inputs, the method describes how to generate multivariate PCEs of the prediction outputs that quantify their associated uncertainty. Arbitrary PCE (aPCE) was chosen because it allows a higher degree of flexibility to model input uncertainty. The obtained polynomial description can be used in subsequent prediction sensitivity analyses thanks to the relationship between polynomial coefficients and Sobol indices. The Sobol indices enable ranking the input parameters according to their influence on trajectory prediction uncertainty. The applicability of the aPCE-based uncertainty quantification detailed herein is analysed through a case study. This case study represents a typical aircraft trajectory prediction problem in ATM, in which uncertain parameters regarding aircraft performance, aircraft intent description, weather forecast, and initial conditions are considered simultaneously. Numerical results are compared to those obtained from a Monte Carlo simulation, demonstrating the advantages of the proposed method. The thesis includes two examples of DSTs (Demand and Capacity Balancing tool, and Arrival Manager) to illustrate the potential benefits of exploiting the proposed uncertainty quantification method.
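For orientation, the coefficient-to-Sobol relationship invoked above can be sketched in generic PCE notation (an illustrative identity for an orthonormal basis, not the thesis's own derivation):
\[
Y \approx \sum_{\alpha \in \mathcal{A}} c_{\alpha}\,\Psi_{\alpha}(\boldsymbol{\xi}),
\qquad
\operatorname{Var}[Y] = \sum_{\alpha \neq \mathbf{0}} c_{\alpha}^{2},
\qquad
S_i = \frac{1}{\operatorname{Var}[Y]} \sum_{\alpha \in \mathcal{A}_i} c_{\alpha}^{2},
\]
where \(\mathcal{A}_i = \{\alpha : \alpha_i > 0,\ \alpha_j = 0 \text{ for } j \neq i\}\) collects the multi-indices involving only the \(i\)-th input, so the first-order Sobol index \(S_i\) used to rank inputs follows directly from the expansion coefficients \(c_{\alpha}\).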
Abstract:
Maintaining accessibility to and understanding of digital information over time is a complex challenge that often requires contributions and interventions from a variety of individuals and organizations. The processes of preservation planning and evaluation are fundamentally implicit and share similar complexity. Both demand comprehensive knowledge and understanding of every aspect of to-be-preserved content and the contexts within which preservation is undertaken. Consequently, means are required for the identification, documentation and association of those properties of data, representation and management mechanisms that in combination lend value, facilitate interaction and influence the preservation process. These properties may be almost limitless in terms of diversity, but are integral to the establishment of classes of risk exposure, and the planning and deployment of appropriate preservation strategies. We explore several research objectives within the course of this thesis. Our main objective is the conception of an ontology for risk management of digital collections. Incorporated within this are our aims to survey the contexts within which preservation has been undertaken successfully, the development of an appropriate methodology for risk management, the evaluation of existing preservation evaluation approaches and metrics, the structuring of best practice knowledge and lastly the demonstration of a range of tools that utilise our findings. We describe a mixed methodology that uses interview and survey, extensive content analysis, practical case study and iterative software and ontology development. We build on a robust foundation, the development of the Digital Repository Audit Method Based on Risk Assessment. We summarise the extent of the challenge facing the digital preservation community (and by extension users and creators of digital materials from many disciplines and operational contexts) and present the case for a comprehensive and extensible knowledge base of best practice. These challenges are manifested in the scale of data growth, the increasing complexity and the increasing onus on communities with no formal training to offer assurances of data management and sustainability. Together they demand an intuitive and adaptable means of evaluating digital preservation efforts. The need for individuals and organisations to validate the legitimacy of their own efforts is a particular priority. We introduce our approach, based on risk management. Risk is an expression of the likelihood of a negative outcome and of the impact of such an occurrence. We describe how risk management may be considered synonymous with preservation activity: a persistent effort to negate the dangers posed to information availability, usability and sustainability. Risks can be characterised according to associated goals, activities, responsibilities and policies, in terms of both their manifestation and mitigation. They can be deconstructed into their atomic units, and responsibility for their resolution delegated appropriately. We go on to describe how the manifestation of risks typically spans an entire organisational environment, and how taking risk as the focus of our analysis safeguards against omissions that may occur when pursuing functional, departmental or role-based assessment. We discuss the importance of relating risk factors, through the risks themselves or associated system elements.
To do so will yield the preservation best-practice knowledge base that is conspicuously lacking within the international digital preservation community. We present as research outcomes an encapsulation of preservation practice (and explicitly defined best practice) as a series of case studies, in turn distilled into atomic, related information elements. We conduct our analyses in the formal evaluation of memory institutions in the UK, US and continental Europe. Furthermore, we showcase a series of applications that use the fruits of this research as their intellectual foundation. Finally, we document our results in a range of technical reports and conference and journal articles. We present evidence of preservation approaches and infrastructures from a series of case studies conducted in a range of international preservation environments. We then aggregate this into a linked data structure entitled PORRO, an ontology relating preservation repository, object and risk characteristics, intended to support preservation decision making and evaluation. The methodology leading to this ontology is outlined, and lessons are drawn by revisiting legacy studies and by exposing the resource and associated applications to evaluation by the digital preservation community.
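A minimal sketch of how repository, object and risk characteristics might be related as linked data in the spirit of PORRO follows; the namespace, class names and properties are hypothetical placeholders, not the published PORRO vocabulary.

# Illustrative sketch only: relates a repository, an object and a risk as linked data.
# The namespace, class names and properties below are hypothetical placeholders,
# not the published PORRO vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

PORRO = Namespace("http://example.org/porro#")  # placeholder namespace

g = Graph()
g.bind("porro", PORRO)

repo = PORRO.InstitutionalRepository
obj = PORRO.TiffMasterImage
risk = PORRO.FormatObsolescence

g.add((repo, RDF.type, PORRO.Repository))
g.add((obj, RDF.type, PORRO.DigitalObject))
g.add((risk, RDF.type, PORRO.Risk))
g.add((obj, PORRO.heldBy, repo))                                # object is managed by the repository
g.add((risk, PORRO.affects, obj))                               # risk manifests against the object
g.add((risk, PORRO.mitigatedBy, PORRO.FormatMigrationPolicy))   # mitigation responsibility delegated
g.add((risk, RDFS.label, Literal("File format obsolescence")))

print(g.serialize(format="turtle"))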
Abstract:
This dissertation investigates customer behavior modeling in service outsourcing and revenue management in the service sector (i.e., airline and hotel industries). In particular, it focuses on a common theme of improving firms' strategic decisions through the understanding of customer preferences. Decisions concerning degrees of outsourcing, such as firms' capacity choices, are important to performance outcomes. These choices are especially important in high-customer-contact services (e.g., the airline industry) because of the characteristics of services: simultaneity of consumption and production, and intangibility and perishability of the offering. Essay 1 estimates how outsourcing affects customer choices and market share in the airline industry, and consequently the revenue implications of outsourcing. However, outsourcing decisions are typically endogenous. A firm may choose whether or not to outsource based on what it expects to be the best outcome. Essay 2 contributes to the literature by proposing a structural model that captures a firm's profit-maximizing decision-making behavior in a market. This makes it possible to predict the consequences (i.e., performance outcomes) of future strategic moves. Another emerging area in service operations management is revenue management. Choice-based revenue systems incorporate discrete choice models into traditional revenue management algorithms. To successfully implement a choice-based revenue system, it is necessary to estimate customer preferences as a valid input to optimization algorithms. The third essay investigates how to estimate customer preferences when part of the market is consistently unobserved. This issue is especially prominent in choice-based revenue management systems. Normally a firm observes only its own purchases, while customers who purchase from competitors or do not purchase at all are unobserved. Most current estimation procedures depend on unrealistic assumptions about customer arrivals. This study proposes a new estimation methodology that does not require any prior knowledge about the customer arrival process and allows for arbitrary demand distributions. Compared with previous methods, this model performs better when the true demand is highly variable.
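To make the role of discrete choice estimation concrete, the following minimal sketch computes multinomial logit (MNL) choice shares including a no-purchase alternative, which is exactly the quantity left unobserved when only own-firm purchases are recorded; the utilities and products are invented for illustration and are not the essays' estimated models.

# Toy multinomial logit (MNL) choice shares of the kind used as input to
# choice-based revenue management; utilities below are invented for illustration.
import math

def mnl_shares(utilities, include_no_purchase=True):
    """Return choice probabilities for each offered product (and the no-purchase option)."""
    exp_u = [math.exp(u) for u in utilities]
    denom = sum(exp_u) + (1.0 if include_no_purchase else 0.0)  # no-purchase utility normalised to 0
    shares = [e / denom for e in exp_u]
    if include_no_purchase:
        shares.append(1.0 / denom)
    return shares

# Two fare products with hypothetical utilities; the last share is the unobserved no-purchase option.
print(mnl_shares([0.4, -0.2]))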
Abstract:
Given that landfills are depletable and replaceable resources, the right approach to landfill management is to design an optimal sequence of landfills rather than to design every single landfill separately. In this paper we use Optimal Control models, with mixed elements of both continuous- and discrete-time problems, to determine an optimal sequence of landfills with respect to their capacities and lifetimes. The resulting optimization problems involve splitting a planning horizon into several subintervals whose lengths have to be decided. In each subinterval, costs whose amount depends on the values of the decision variables have to be borne. The results obtained may be applied to other economic problems such as private and public investments, consumption decisions on durable goods, etc.
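A stylised way to write such a sequencing problem, under assumed notation (waste inflow \(w(t)\), construction cost \(C(K)\), discount rate \(\rho\)) rather than the paper's exact formulation, is:
\[
\min_{\{t_i\},\,\{K_i\}} \ \sum_{i=1}^{n} e^{-\rho t_{i-1}}\, C(K_i)
\quad \text{subject to} \quad
K_i = \int_{t_{i-1}}^{t_i} w(t)\,\mathrm{d}t, \qquad t_0 = 0,
\]
where landfill \(i\) operates over the subinterval \([t_{i-1}, t_i]\), so choosing the switching times \(\{t_i\}\) simultaneously fixes each landfill's lifetime and its required capacity \(K_i\).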
Abstract:
Persistent daily congestion has been increasing in recent years, particularly along major corridors during selected periods in the mornings and evenings. On certain segments, these roadways are often at or near capacity. A conventional predefined control strategy, however, cannot accommodate demand that changes over time, which motivates the dynamic lane management strategies discussed in this thesis. These strategies include hard shoulder running, reversible HOV lanes, dynamic tolls, and variable speed limits. A mesoscopic agent-based DTA model is used to simulate the different strategies and scenarios. The analyses show that all strategies mitigate congestion in terms of average speed and average density. The largest improvements come from hard shoulder running and reversible HOV lanes, while the other two strategies provide more stable traffic. In terms of average speed and travel time, hard shoulder running is the most suitable strategy for the congested I-270 corridor to help relieve traffic pressure.
Abstract:
Doctorate in Agronomic Engineering - Instituto Superior de Agronomia - UL
Abstract:
Background: Management of hyperbilirubinemia remains a challenge for neonatal medicine because of the risk of neurological complications related to the toxicity of severe hyperbilirubinemia. Objectives: The purpose of this study was to examine the validity of the cord blood alkaline phosphatase level for predicting neonatal hyperbilirubinemia. Patients and Methods: Between October and December 2013, a total of 102 healthy term infants born to healthy mothers were studied. Cord blood samples were collected for measurement of alkaline phosphatase levels immediately after birth. Neonates were followed up for the emergence of jaundice. Newborns with clinical jaundice were recalled and serum bilirubin levels measured. Appropriate treatment based on the serum bilirubin level was given. Alkaline phosphatase levels were compared between the non-jaundiced and the jaundiced, treated neonates. Results: The incidence of severe jaundice requiring treatment among the followed-up neonates was 9.8%. The mean alkaline phosphatase level was 309.09 ± 82.51 IU/L in the non-jaundiced group and 367.80 ± 73.82 IU/L in the severely jaundiced group (P = 0.040). A cutoff value of 314 IU/L was associated with a sensitivity of 80% and a specificity of 63% for predicting neonatal hyperbilirubinemia requiring treatment. Conclusions: The cord blood alkaline phosphatase level can be used as a predictor of severe neonatal jaundice.
Abstract:
The concept of social carrying capacity, though open to debate and critique, is a valuable tool for enhancing the management of recreational use in protected natural areas. In this study, conducted in the Sierra de las Nieves natural park (Spain), we first categorised the hikers making use of the park and then, from the profiles obtained, analysed their perception of crowding on the trails. This assessment was subsequently used to gauge levels of user satisfaction and thus to determine the psychosocial carrying capacity of the park. The results obtained can be extrapolated to most Spanish natural parks in Mediterranean mountain areas, given their comparable visitor numbers and the prevalence of recreational hiking. The results suggest that management efforts should be directed toward relocating trails outside the core areas, so that user preferences may be satisfied while less impact is made on the areas of highest environmental value.
Abstract:
Early water resources modeling efforts were aimed mostly at representing hydrologic processes, but the need for interdisciplinary studies has led to increasing complexity and integration of environmental, social, and economic functions. The gradual shift from merely employing engineering-based simulation models to applying more holistic frameworks is an indicator of promising changes in the traditional paradigm for the application of water resources models, supporting more sustainable management decisions. This dissertation contributes to application of a quantitative-qualitative framework for sustainable water resources management using system dynamics simulation, as well as environmental systems analysis techniques to provide insights for water quality management in the Great Lakes basin. The traditional linear thinking paradigm lacks the mental and organizational framework for sustainable development trajectories, and may lead to quick-fix solutions that fail to address key drivers of water resources problems. To facilitate holistic analysis of water resources systems, systems thinking seeks to understand interactions among the subsystems. System dynamics provides a suitable framework for operationalizing systems thinking and its application to water resources problems by offering useful qualitative tools such as causal loop diagrams (CLD), stock-and-flow diagrams (SFD), and system archetypes. The approach provides a high-level quantitative-qualitative modeling framework for "big-picture" understanding of water resources systems, stakeholder participation, policy analysis, and strategic decision making. While quantitative modeling using extensive computer simulations and optimization is still very important and needed for policy screening, qualitative system dynamics models can improve understanding of general trends and the root causes of problems, and thus promote sustainable water resources decision making. Within the system dynamics framework, a growth and underinvestment (G&U) system archetype governing Lake Allegan's eutrophication problem was hypothesized to explain the system's problematic behavior and identify policy leverage points for mitigation. A system dynamics simulation model was developed to characterize the lake's recovery from its hypereutrophic state and assess a number of proposed total maximum daily load (TMDL) reduction policies, including phosphorus load reductions from point sources (PS) and non-point sources (NPS). It was shown that, for a TMDL plan to be effective, it should be considered a component of a continuous sustainability process, which considers the functionality of dynamic feedback relationships between socio-economic growth, land use change, and environmental conditions. Furthermore, a high-level simulation-optimization framework was developed to guide watershed scale BMP implementation in the Kalamazoo watershed. Agricultural BMPs should be given priority in the watershed in order to facilitate cost-efficient attainment of the Lake Allegan's TP concentration target. However, without adequate support policies, agricultural BMP implementation may adversely affect the agricultural producers. Results from a case study of the Maumee River basin show that coordinated BMP implementation across upstream and downstream watersheds can significantly improve cost efficiency of TP load abatement.
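As a minimal illustration of the stock-and-flow logic underlying such system dynamics models, the sketch below integrates a single lake phosphorus stock under a stepped load reduction; all parameter values are arbitrary placeholders, not the calibrated Lake Allegan model.

# Minimal stock-and-flow sketch: one stock (lake total phosphorus, TP) driven by an
# inflow load and first-order losses, with a TMDL-style load cut after year 5.
# All parameter values are arbitrary placeholders, not the calibrated Lake Allegan model.
def simulate_lake_tp(years=30, dt=0.1):
    tp_mass = 5.0            # stock: TP in the lake (tonnes), assumed initial value
    volume = 50.0e6          # lake volume (m^3), assumed
    settling_rate = 0.5      # first-order settling loss (1/yr), assumed
    flushing_rate = 0.8      # outflow (flushing) rate (1/yr), assumed
    history = []
    t = 0.0
    while t < years:
        load = 6.0 if t < 5 else 3.0                  # inflow TP load (tonnes/yr)
        losses = (settling_rate + flushing_rate) * tp_mass
        tp_mass += (load - losses) * dt               # Euler integration of the stock
        history.append((t, 1e9 * tp_mass / volume))   # concentration in micrograms/L
        t += dt
    return history

for t, conc in simulate_lake_tp()[::50]:              # print every 5 simulated years
    print(f"year {t:5.1f}: TP ~ {conc:5.1f} ug/L")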
Abstract:
By employing interpretive policy analysis, this thesis aims to assess, measure, and explain policy capacity for government and non-government organizations involved in reclaiming Alberta's oil sands. Using this type of analysis to assess policy capacity is a novel approach to understanding reclamation policy; this research therefore provides a unique contribution to the literature on reclamation policy. The oil sands region in northeast Alberta, Canada is an area of interest for a few reasons, primarily because of the vast reserves of bitumen and the environmental cost associated with developing this resource. An increase in global oil demand has created an incentive for industry to seek out and develop new reserves. Alberta's oil sands are one of the largest remaining reserves in the world, and there is significant interest in increasing production in this region. Furthermore, tensions in several oil-exporting nations in the Middle East remain unresolved, and this has garnered additional support for a supply-side solution to North American oil demands. This solution relies upon the development of reserves in both the United States and Canada. These compounding factors have contributed to the increased development of the oil sands of northeastern Alberta. Essentially, a rapid expansion of oil sands operations is ongoing and is the source of significant disturbance across the region. This disturbance, and the promises of reclamation, is a source of contentious debate amongst stakeholders and continues to be highly visible in the media. If oil sands operations are to retain their social license to operate, it is critical that reclamation efforts be effective. One concern expressed by non-governmental organizations (NGOs) is criticism of the current monitoring and enforcement of regulatory programs in the oil sands. Alberta's NGOs have suggested that the data made available to them originate from industrial sources and are generally unchecked by government. In an effort to discern the overall status of reclamation in the oil sands, this study explores several factors essential to policy capacity: work environment, training, employee attitudes, perceived capacity, policy tools, evidence-based work, and networking. Data were collected through key informant interviews with senior policy professionals in government and non-government agencies in Alberta. The agencies of interest in this research are: Canadian Association of Petroleum Producers (CAPP); Alberta Environment and Sustainable Resource Development (AESRD); Alberta Energy Regulator (AER); Cumulative Environmental Management Association (CEMA); Alberta Environment Monitoring, Evaluation, and Reporting Agency (AEMERA); and Wood Buffalo Environmental Association (WBEA). The aim of this research is to explain how and why reclamation policy is conducted in Alberta's oil sands. This will illuminate government capacity, NGO capacity, and the interaction of these two agency typologies. In addition to answering the research questions, another goal of this project is to show that interpretive analysis of policy capacity can be used to measure and predict policy effectiveness. The oil sands of Alberta are the focus of this project; however, future projects could focus on any government policy scenario utilizing evidence-based approaches.
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques to substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
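As a hedged sketch of the kind of machine-learning performance model described above, the snippet below trains a support vector regression to map resource allocations to a performance metric; the features, data points, and hyperparameters are placeholders rather than the thesis's experimental setup.

# Sketch of a virtualized-application performance model: predict a performance
# metric (e.g. mean response time) from resource allocation parameters.
# Data and feature choices are placeholders, not the thesis's experimental setup.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Each row: [cpu_cap_percent, memory_MB, io_bandwidth_MBps]; target: response time (ms).
X = np.array([[25, 1024, 50], [50, 2048, 100], [75, 4096, 150], [100, 8192, 200]], dtype=float)
y = np.array([240.0, 130.0, 90.0, 70.0])

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
model.fit(X, y)

# Query the model when sizing a VM against a target SLA.
candidate = np.array([[60, 3072, 120]], dtype=float)
print(model.predict(candidate))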
Abstract:
Frequency, time, and place of charging and discharging have a critical impact on the Quality of Experience (QoE) of using Electric Vehicles (EVs). EV charging and discharging scheduling schemes should consider both the QoE of using an EV and the load capacity of the power grid. In this paper, we design a traveling-plan-aware scheduling scheme for EV charging in the driving pattern and a cooperative EV charging and discharging scheme in the parking pattern to improve the QoE of using an EV and enhance the reliability of the power grid. For traveling-plan-aware scheduling, the assignment of EVs to Charging Stations (CSs) is modeled as a many-to-one matching game, and the Stable Matching Algorithm (SMA) is proposed. For cooperative EV charging and discharging in the parking pattern, the electricity exchange between charging EVs and discharging EVs in the same parking lot is formulated as a many-to-many matching model with ties, and we develop the Pareto Optimal Matching Algorithm (POMA). Simulation results indicate that the SMA can significantly improve the average system utility for EV charging in the driving pattern, and that the POMA can increase the amount of electricity offloaded from the grid, which helps enhance the reliability of the power grid.
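As a rough illustration of the many-to-one matching machinery involved, the sketch below runs a generic capacity-constrained deferred-acceptance routine assigning EVs to CSs; the preference lists and capacities are invented, and this generic routine stands in for, rather than reproduces, the paper's SMA.

# Generic capacity-constrained deferred acceptance (EV-proposing) for the
# many-to-one assignment of EVs to charging stations (CSs). Preferences and
# capacities are invented examples; this is not the paper's SMA itself.
def deferred_acceptance(ev_prefs, cs_prefs, cs_capacity):
    rank = {cs: {ev: i for i, ev in enumerate(prefs)} for cs, prefs in cs_prefs.items()}
    assigned = {cs: [] for cs in cs_prefs}          # current tentative matches per CS
    next_choice = {ev: 0 for ev in ev_prefs}        # index of the next CS each EV will propose to
    free = list(ev_prefs)
    while free:
        ev = free.pop()
        if next_choice[ev] >= len(ev_prefs[ev]):
            continue                                # EV has exhausted its list; stays unmatched
        cs = ev_prefs[ev][next_choice[ev]]
        next_choice[ev] += 1
        assigned[cs].append(ev)
        if len(assigned[cs]) > cs_capacity[cs]:
            # The CS keeps its most preferred EVs up to capacity and rejects the worst one.
            assigned[cs].sort(key=lambda e: rank[cs][e])
            rejected = assigned[cs].pop()
            free.append(rejected)
    return assigned

ev_prefs = {"ev1": ["cs1", "cs2"], "ev2": ["cs1", "cs2"], "ev3": ["cs1"]}
cs_prefs = {"cs1": ["ev2", "ev1", "ev3"], "cs2": ["ev1", "ev2", "ev3"]}
print(deferred_acceptance(ev_prefs, cs_prefs, {"cs1": 1, "cs2": 2}))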
Abstract:
We quantified the ecosystem effects of small-scale gears operating in southern European waters (Portugal, Spain, Greece), based on a widely accepted ecosystem measure and indicator, the trophic level (TL). We used data from experimental fishing trials carried out from 1997 to 2000. We studied a wide range of gear types and sizes: (1) gill nets of 8 mesh sizes, ranging from 44 to 80 mm; (2) trammel nets of 9 inner panel mesh sizes, ranging from 40 to 140 mm; and (3) longlines of 8 hook sizes, ranging from No. 15 (small) to No. 5 (large). We used the number of species caught per TL class to construct trophic signatures (i.e. cumulative TL distributions), and estimated the TL at 25, 50 and 75% cumulative frequency (TL25, TL50, TL75) and the slopes using the logistic function. We also estimated the mean weighted TL of the catches (TLW). Our analyses showed that the TL characteristics of longlines varied much more than those of gill and trammel nets. The longlines with large hooks (Nos. 10, 9, 7, 5) were very TL selective, and their trophic signatures had very steep slopes, the highest mean TL50 values, very narrow mean TL25 to TL75 ranges and mean TLW > 4. In addition, the mean number of TL classes exploited was smaller and the mean TL50 and TLW were larger for the longlines with small hooks (Nos. 15, 13, 12, 11) in Greek than in Portuguese waters. Trammel and gill nets caught more TL classes, and the mean slopes of their trophic signatures were significantly smaller than those of longlines as a group. In addition, the mean number of TL classes exploited, the mean TL50 and the TLW of gill nets were significantly smaller than those of trammel nets. We attribute the differences between longlines with small hooks to bait type, and the differences between all gear types to their characteristic species- and size-selectivity patterns. Finally, we showed how the slope and the TL50 of the trophic signatures can be used to characterise different gears along the ecologically 'unsustainable-sustainable' continuum.
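For readers wanting to reproduce the curve-fitting step, the sketch below fits a logistic curve to a cumulative TL distribution and reads off TL50 and the slope; the data points are invented, not the reported gear data.

# Sketch: fit a logistic curve to a cumulative trophic-level (TL) distribution and
# read off TL50 (inflection point) and the slope. The data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(tl, tl50, slope):
    """Cumulative fraction of species caught at or below trophic level `tl`."""
    return 1.0 / (1.0 + np.exp(-slope * (tl - tl50)))

# Hypothetical cumulative frequencies of species per TL class for one gear.
tl_classes = np.array([2.5, 3.0, 3.5, 4.0, 4.5])
cum_freq = np.array([0.05, 0.20, 0.55, 0.85, 0.98])

(tl50, slope), _ = curve_fit(logistic, tl_classes, cum_freq, p0=[3.5, 2.0])
print(f"TL50 = {tl50:.2f}, slope = {slope:.2f}")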
Abstract:
Background and aims: Advances in modern medicine have led to improved outcomes after stroke, yet an increased treatment burden has been placed on patients. Treatment burden is the workload of health care for people with chronic illness and the impact that this has on functioning and well-being. Those with comorbidities are likely to be particularly burdened. Excessive treatment burden can negatively affect outcomes. Individuals are likely to differ in their ability to manage health problems and follow treatments, defined as patient capacity. The aim of this thesis was to explore the experience of treatment burden for people who have had a stroke and the factors that influence patient capacity. Methods: There were four phases of research. 1) A systematic review of the qualitative literature that explored the experience of treatment burden for those with stroke. Data were analysed using framework synthesis, underpinned by Normalisation Process Theory (NPT). 2) A cross-sectional study of 1,424,378 participants >18 years, demographically representative of the Scottish population. Binary logistic regression was used to analyse the relationship between stroke and the presence of comorbidities and prescribed medications. 3) Interviews with twenty-nine individuals with stroke, fifteen analysed by framework analysis underpinned by NPT and fourteen by thematic analysis. The experience of treatment burden was explored in depth along with factors that influence patient capacity. 4) Integration of findings in order to create a conceptual model of treatment burden and patient capacity in stroke. Results: Phase 1) A taxonomy of treatment burden in stroke was created. The following broad areas of treatment burden were identified: making sense of stroke management and planning care; interacting with others including health professionals, family and other stroke patients; enacting management strategies; and reflecting on management. Phase 2) 35,690 people (2.5%) had a diagnosis of stroke and, of the 39 comorbidities examined, 35 were significantly more common in those with stroke. The proportion of those with stroke that had >1 additional morbidity present (94.2%) was almost twice that of controls (48%) (odds ratio (OR) adjusted for age, gender and socioeconomic deprivation: 5.18; 95% confidence interval: 4.95-5.43), and 34.5% had 4-6 comorbidities compared to 7.2% of controls (OR: 8.59; 8.17-9.04). In the stroke group, 12.6% of people had a record of >11 repeat prescriptions compared to only 1.5% of the control group (OR adjusted for age, gender, deprivation and morbidity count: 15.84; 14.86-16.88). Phase 3) The taxonomy of treatment burden from Phase 1 was verified and expanded. Additionally, treatment burdens were identified as arising from either the workload of healthcare or the endurance of care deficiencies. A taxonomy of patient capacity was created. Six factors were identified that influence patient capacity: personal attributes and skills; physical and cognitive abilities; support network; financial status; life workload; and environment. A conceptual model of treatment burden was created. Healthcare workload and the presence of care deficiencies can influence and be influenced by patient capacity. The quality and configuration of health and social care services influences healthcare workload, care deficiencies and patient capacity.
Conclusions: This thesis provides important insights into the considerable treatment burden experienced by people who have had a stroke and the factors that affect their capacity to manage health. Multimorbidity and polypharmacy are common in those with stroke and levels of these are high. Findings have important implications for the design of clinical guidelines and healthcare delivery, for example co-ordination of care should be improved, shared decision-making enhanced, and patients better supported following discharge from hospital.
Abstract:
Montado decline has been reported since the end of the nineteenth century in southern Portugal and increased markedly during the 1980s. Consensual reports in the literature suggest that this decline is due to a number of factors, such as environmental constraints, forest diseases, inappropriate management, and socioeconomic issues. An assessment of the pattern of montado distribution was conducted to reveal to what extent land management, environmental variables, and spatial factors contributed to montado area loss in southern Portugal from 1990 to 2006. A total of 14 independent variables, presumably related to montado loss, were grouped into three sets: environmental variables, land management variables, and spatial variables. From 1990 to 2006, approximately 90,054 ha of montado area were lost, with an estimated annual regression rate of 0.14% year⁻¹. Variation partitioning showed that the land management model accounted for the highest percentage of explained variance (51.8%), followed by spatial factors (44.6%) and environmental factors (35.5%). These results indicate that most of the variance in the large-scale distribution of recent montado loss is due to land management, either alone or in combination with environmental and spatial factors. The full GAM model showed that livestock grazing is one of the most important variables affecting montado loss. This suggests that the optimum carrying capacity for livestock grazing in montado should decrease to 0.18–0.60 LU ha⁻¹ under current ecological conditions in southern Portugal.