921 results for Resource based view
Abstract:
Introduction Quantitative and accurate measurements of fat and muscle in the body are important for prevention and diagnosis of diseases related to obesity and muscle degeneration. Manually segmenting muscle and fat compartments in MR body images is laborious and time-consuming, hindering implementation in large cohorts. In the present study, the feasibility and success rate of a Dixon-based MR scan followed by an intensity-normalised, non-rigid, multi-atlas based segmentation was investigated in a cohort of 3,000 subjects. Materials and Methods 3,000 participants in the in-depth phenotyping arm of the UK Biobank imaging study underwent a comprehensive MR examination. All subjects were scanned using a 1.5 T MR scanner with the dual-echo Dixon Vibe protocol, covering neck to knees. Subjects were scanned with six slabs in supine position, without localizer. Automated body composition analysis was performed using the AMRA Profiler™ system to segment and quantify visceral adipose tissue (VAT), abdominal subcutaneous adipose tissue (ASAT) and thigh muscles. Technical quality assurance was performed and a standard set of acceptance/rejection criteria was established. Descriptive statistics were calculated for all volume measurements and quality assurance metrics. Results Of the 3,000 subjects, 2,995 (99.83%) were analysable for body fat, 2,828 (94.27%) were analysable when body fat and one thigh were included, and 2,775 (92.50%) were fully analysable for body fat and both thigh muscles. Datasets could not be analysed mainly because of missing slabs in the acquisition, or because the patient was positioned so that large parts of the volume were outside the field-of-view. Discussion and Conclusions In conclusion, this study showed that the rapid UK Biobank MR protocol was well tolerated by most subjects and sufficiently robust to achieve a very high success rate for body composition analysis. This research has been conducted using the UK Biobank Resource.
Abstract:
Objective: To examine the effectiveness of an “Enhancing Positive Emotions Procedure” (EPEP), based on positive psychology and cognitive behavioral therapy, in relieving distress at the time of adjuvant chemotherapy treatment in colorectal cancer (CRC) patients. It is expected that EPEP will increase quality of life and positive affect in CRC patients during the chemotherapy treatment intervention and at 1-month follow-up. Method: A group of 24 CRC patients received the EPEP procedure (intervention group), whereas another group of 20 CRC patients did not receive the EPEP (control group). Quality of life (EORTC-QLQ-C30) and mood (PANAS) were assessed at three moments: prior to entering the study (T1), at the end of the time required to apply the EPEP (T2, 6 weeks after T1), and at follow-up (T3, one month after T2). Patients’ assessments of the EPEP (improvement in mood states, and significance of the attention received) were assessed with Likert scales. Results: Insomnia was reduced in the intervention group. The intervention group had better scores on positive affect, although there were no significant differences between groups or over time. There was a trend towards better scores at T2 and T3 for the intervention group on the global health status, physical, role, and social functioning scales. Patients stated that positive mood was enhanced and that EPEP was an important resource. Conclusions: CRC patients receiving EPEP during chemotherapy believed that this intervention was important. Furthermore, EPEP seems to improve positive affect and quality of life. EPEP has potential benefits, and its implementation for CRC patients should be considered.
Abstract:
This paper examines the methodological choices of researchers studying the HR practices–outcome relationship via a content analysis of 281 studies published across the last twenty years. The prevalence and trajectory of change over time are reported for a wide range of methodological choices relevant to internal, external, construct, and statistical conclusion validity. While the results indicate a high incidence of potentially problematic cross-sectional, single-informant, and single-level designs, they also reveal significant improvements over time across many validity-relevant methodological choices. This broad-based improvement in the methodological underpinnings of HR research suggests that researchers and practitioners can view the findings reported in the HR literature with increasing confidence. Directions for future research are provided.
Abstract:
Aim To evaluate the effect of regional implementation of a preconception counselling resource into routine diabetes care on pregnancy planning indicators. Methods A preconception counselling DVD was distributed to women by diabetes care teams and general practices. Subsequently, in a prospective population-based study, pregnancy planning indicators were evaluated. The post-DVD cohort (n = 135), including a viewed-DVD subgroup (n = 58), was compared with an historical cohort (pre-DVD, n = 114). The primary outcome was HbA1c at the first diabetes-antenatal visit. Secondary outcomes included preconception folic acid consumption, planned pregnancy and HbA1c recorded in the 6 months preconception. Results Mean first-visit HbA1c was lower post-DVD vs. pre-DVD (7.5% vs. 7.8% [58.4 vs. 61.8 mmol/mol]; P = 0.12), although not statistically significant. 53% and 20% of women with type 1 and type 2 diabetes, respectively, viewed the DVD. The viewed-DVD subgroup was significantly more likely than the pre-DVD cohort to have a lower first-visit HbA1c (6.9% vs. 7.8% [52.1 vs. 61.8 mmol/mol], P < 0.001), a planned pregnancy (88% vs. 59%, P < 0.001), to have taken folic acid preconception (81% vs. 43%, P = 0.001), and to have had HbA1c recorded preconception (88% vs. 53%, P < 0.001). Conclusions Implementation of a preconception counselling resource was associated with improved pregnancy planning indicators. Women with type 2 diabetes are difficult to reach. Greater awareness within primary care of the importance of preconception counselling among this population is needed.
Abstract:
Local communities collectively managing common pool resources can play an important role in sustainable management, but they often lack the skills and context-specific tools required for such management. The complex dynamics of social-ecological systems (SES), the need for management capacities, and communities’ limited empowerment and participation skills present challenges for community-based natural resource management (CBNRM) strategies. We analyzed the applicability of prospective structural analysis (PSA), a strategic foresight tool, to support decision making and to foster sustainable management and capacity building in CBNRM contexts and the modifications necessary to use the tool in such contexts. By testing PSA in three SES in Colombia, Mexico, and Argentina, we gathered information regarding the potential of this tool and its adaptation requirements. The results suggest that the tool can be adapted to these contexts and contribute to fostering sustainable management and capacity building. It helped identify the systems’ dynamics, thus increasing the communities’ knowledge about their SES and informing the decision-making process. Additionally, it drove a learning process that both fostered empowerment and built participation skills. The process demanded both time and effort, and required external monitoring and facilitation, but community members could be trained to master it. Thus, we suggest that the PSA technique has the potential to strengthen CBNRM and that other initiatives could use it, but they must be aware of these requirements.
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of “Big Data”, generating data on the order of hundreds of millions of records daily. By analyzing these data and making predictions, better development plans can be drawn up. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. First, the paper introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their respective advantages and disadvantages. Because the resource management module is the core of YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, the Capacity Scheduler and the Fair Scheduler. The main work of this paper is to study and analyze the Dominant Resource Fairness (DRF) algorithm of YARN and to put forward a maximum-resource-utilization algorithm based on it. The paper also provides a suggestion to improve an unreasonable aspect of the resource preemption model. Emphasizing “fairness” during resource allocation is the core concept of the DRF algorithm in YARN. Because the cluster serves multiple users and multiple resource types, each user’s resource request is also multi-dimensional. The DRF algorithm divides a user’s resources into the dominant resource and normal resources: for a user, the dominant resource is the requested resource whose share of the cluster is highest, and the others are normal resources. The DRF algorithm requires the dominant resource share of each user to be equal. But in cases where different users’ dominant resource amounts differ greatly, emphasizing “fairness” is not suitable and cannot improve the resource utilization of the cluster. By analyzing these cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm takes “fairness” into consideration, but not as its main principle: maximizing resource utilization is the main principle and goal. Comparing the results of DRF and the new DRF-based algorithm, we found that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis installs the YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
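For readers unfamiliar with the baseline, a minimal sketch of the standard Dominant Resource Fairness idea is given below. This is the textbook DRF allocation, not the modified maximum-utilization algorithm proposed in the thesis, and the cluster capacities and per-task demands are illustrative only.

```python
# Minimal sketch of Dominant Resource Fairness (DRF) allocation.
# Capacities and demands are illustrative; this is not the thesis's algorithm.

def drf_allocate(capacity, demands, rounds=100):
    """Repeatedly grant one task to the user with the lowest dominant share."""
    used = {r: 0.0 for r in capacity}
    shares = {u: 0.0 for u in demands}          # dominant share per user
    allocations = {u: 0 for u in demands}       # tasks granted per user

    for _ in range(rounds):
        # Pick the user whose dominant share is currently smallest.
        user = min(shares, key=shares.get)
        demand = demands[user]
        # Simplification: stop once the lowest-share user no longer fits.
        if any(used[r] + demand[r] > capacity[r] for r in capacity):
            break
        for r in capacity:
            used[r] += demand[r]
        allocations[user] += 1
        # Dominant share = max fraction of any single resource this user holds.
        shares[user] = max(allocations[user] * demand[r] / capacity[r]
                           for r in capacity)
    return allocations

# Classic example: 9 CPUs and 18 GB of memory -> A gets 3 tasks, B gets 2.
print(drf_allocate({"cpu": 9, "mem": 18},
                   {"A": {"cpu": 1, "mem": 4},    # A's dominant resource: memory
                    "B": {"cpu": 3, "mem": 1}}))  # B's dominant resource: CPU
```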
Abstract:
Syntactic logics do not suffer from the problems of logical omniscience but are often thought to lack interesting properties relating to epistemic notions. By focusing on the case of rule-based agents, I develop a framework for modelling resource-bounded agents and show that the resulting models have a number of interesting properties.
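As a purely illustrative sketch (not the paper's formal models), the toy rule-based agent below only believes what its explicit rules have derived within a fixed number of steps, which is the intuition behind avoiding logical omniscience:

```python
# Hypothetical illustration of a resource-bounded, rule-based agent: beliefs
# grow only by applying explicit rules, one per time step, so the agent never
# believes all logical consequences at once.

def step(beliefs, rules):
    """Apply at most one applicable rule and return the new belief set."""
    for premises, conclusion in rules:
        if premises <= beliefs and conclusion not in beliefs:
            return beliefs | {conclusion}
    return beliefs  # no rule applies: beliefs stay fixed

beliefs = {"p", "p->q", "q->r"}
rules = [({"p", "p->q"}, "q"),      # modus ponens instances as explicit rules
         ({"q", "q->r"}, "r")]

for t in range(3):                   # a resource bound of three reasoning steps
    beliefs = step(beliefs, rules)
    print(t, sorted(beliefs))
```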
Abstract:
Doctoral thesis, Environmental Sciences (Spatial Planning), 5 April 2013, Universidade dos Açores.
Abstract:
Municipal Solid Waste (MSW) is one of the biggest challenges that cities are facing: MSW is considered one of the main sources of energy consumption, urban degradation and pollution. This paper defines the major negative effects of MSW on cities and proposes new solutions to guide waste policies. Most contemporary waste management efforts are focused at the regional government level and based on high-tech waste disposal methods such as landfill and incineration. However, these methods are becoming increasingly expensive, energy-inefficient and polluting: waste disposal is not sustainable and will have negative implications for future generations. This paper proposes the principal solutions that could be undertaken. New policy instruments are presented, updating and adapting policies and encouraging innovation towards less wasteful systems. Waste management plans are fundamental to increasing the ability of urban areas to adapt effectively to waste challenges. These plans have to give an outline of waste streams and treatment options and provide a scenario for the following years that significantly reduces landfill and incineration in favor of prevention, reuse and recycling. The key aim of an urban waste management plan is to set out the work towards a zero-waste economy as part of the transition to a sustainable economy. Other questions remain open: How can people’s behavior be changed? What is the role of environmental education and risk perception? It is clear that the involvement of the various stakeholders and the wider public in the planning process should aim at ensuring acceptance of the waste policy.
Abstract:
Objectives: To identify reasons for neonatal admission and death with the aim of determining areas needing improvement. Method: A retrospective chart review was conducted on records for neonates admitted to the Mulago National Referral Hospital Special Care Baby Unit (SCBU) from 1st November 2013 to 31st January 2014. The final diagnosis was generated after two paediatricians analyzed the sequence of the clinical course. Results: A total of 1,192 neonates were admitted. The majority (83.3%) were inborn. The main reasons for admission were prematurity (37.7%) and low APGAR score (27.9%). Overall mortality was 22.1% (outborn 33.6%; inborn 19.8%). Half (52%) of these deaths occurred in the first 24 hours of admission. The major contributors to mortality were prematurity with hypothermia and respiratory distress (33.7%), followed by birth asphyxia with HIE grade III (24.6%) and presumed sepsis (8.7%). The majority of stable at-risk neonates (318/330; i.e. low APGAR score or prematurity without comorbidity) survived. Factors independently associated with death included gestational age <30 weeks (p = 0.002), birth weight <1500 g (p = 0.007) and a 5-minute APGAR score of <7 (p = 0.001). Neither place of birth nor delayed and after-hours admissions were independently associated with mortality. Conclusion and recommendations: The mortality rate in the SCBU is high. Prematurity and its complications were major contributors to mortality. The management of hypothermia and respiratory distress needs scaling up. A step-down unit for monitoring stable at-risk neonates is needed in order to decongest the SCBU.
Abstract:
Libraries, since their inception 4,000 years ago, have been in a process of constant change. Although change was slow for centuries, in recent decades academic libraries have been continuously striving to adapt their services to the ever-changing needs of students and academic staff. In addition, the e-content revolution, technological advances, and ever-shrinking budgets have obliged libraries to allocate their limited resources efficiently between collection and services. Unfortunately, this resource allocation is a complex process due to the diversity of data sources and formats that must be analyzed prior to decision-making, as well as the lack of efficient integration methods. The main purpose of this study is to develop an integrated model that supports libraries in making optimal budgeting and resource allocation decisions among their services and collection by means of a holistic analysis. To this end, a combination of several methodologies and structured approaches is conducted. Firstly, a holistic structure and the required toolset to assess academic libraries holistically are proposed to collect and organize the data from an economic point of view. A four-pronged theoretical framework is used in which the library system and collection are analyzed from the perspective of users and internal stakeholders. The first quadrant corresponds to the internal perspective of the library system: the library's performance, and the costs incurred and resources consumed by library services, are analyzed. The second quadrant evaluates the external perspective of the library system: users' perception of service quality is judged in this quadrant. The third quadrant analyses the external perspective of the library collection, that is, the impact of the current library collection on its users. Finally, the fourth quadrant evaluates the internal perspective of the library collection: the usage patterns followed in manipulating the library collection are analyzed. With a complete framework for data collection in place, these data, which come from multiple sources and therefore in different formats, need to be integrated and stored in a scheme adequate for decision support. Secondly, a data warehousing approach is designed and implemented to integrate, process, and store the holistically collected data. Ultimately, the strategic data stored in the data warehouse are analyzed and used for different purposes, including the following: 1) Data visualization and reporting are proposed to allow library managers to publish library indicators in a simple and quick manner using online reporting tools. 2) Sophisticated data analysis is recommended through the use of data mining tools; three data mining techniques are examined in this research study: regression, clustering and classification. These techniques are applied to the case study in the following manner: predicting future investment in library development; finding clusters of users that share common interests and similar profiles but belong to different faculties; and identifying library factors that affect student academic performance by analyzing possible correlations between library usage and academic performance. 3) As input for optimization models, early experiences of developing an optimal resource allocation model to distribute resources among the different processes of a library system are documented in this study.
Specifically, the problem of allocating funds for the digital collection among the divisions of an academic library is addressed. An optimization model for the problem is defined with the objective of maximizing the usage of the digital collection over all library divisions subject to a single collection budget. By proposing this holistic approach, the research study contributes to knowledge by providing an integrated solution to assist library managers in making economic decisions based on an “as realistic as possible” perspective of the library situation.
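A minimal sketch of the kind of budget-allocation model described above, with hypothetical divisions, usage rates and budget figures, could be expressed as a small linear program:

```python
# Hypothetical sketch: allocate a single digital-collection budget across
# library divisions to maximize expected usage, via scipy's LP solver.
from scipy.optimize import linprog

budget = 100_000.0                              # total collection budget (EUR)
divisions = ["engineering", "science", "humanities"]
usage_per_euro = [0.8, 0.6, 0.4]                # expected downloads per euro spent
min_share = [10_000.0, 10_000.0, 10_000.0]      # guaranteed minimum per division

# linprog minimizes, so negate the usage coefficients to maximize usage.
res = linprog(c=[-u for u in usage_per_euro],
              A_ub=[[1.0, 1.0, 1.0]], b_ub=[budget],      # spend at most the budget
              bounds=list(zip(min_share, [budget] * 3)),  # per-division limits
              method="highs")

for d, x in zip(divisions, res.x):
    print(f"{d}: {x:,.0f} EUR")
print("expected usage:", -res.fun)
```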
Abstract:
The increasing need for computational power in areas such as weather simulation, genomics or Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation for the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
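As an illustration only (not one of the authors' actual provisioning or allocation policies), a job-to-site selection rule of the kind discussed might weigh estimated completion time against monetary cost:

```python
# Illustrative job-to-site selection: choose the site minimizing a weighted
# sum of estimated runtime and cost. All numbers and fields are hypothetical.

def pick_site(job, sites, cost_weight=0.5):
    """Return the site minimizing a weighted sum of runtime and cost."""
    def score(site):
        runtime = job["work"] / site["cores"] + site["queue_wait"]   # hours
        cost = runtime * site["price_per_hour"]
        return (1 - cost_weight) * runtime + cost_weight * cost
    return min(sites, key=score)

sites = [{"name": "local-grid", "cores": 64,  "queue_wait": 2.0, "price_per_hour": 0.0},
         {"name": "cloud-a",    "cores": 256, "queue_wait": 0.1, "price_per_hour": 8.0}]
job = {"work": 512.0}   # core-hours of work in the parallel job

print(pick_site(job, sites)["name"])
```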
Abstract:
The length of stay of preterm infants in a neonatology service has become an issue of growing concern, considering, on the one hand, the mothers' and infants' health conditions and, on the other hand, the scarcity of the healthcare facilities' own resources. Thus, a pro-active problem-solving strategy has to be put in place, either to improve the quality of service provided or to reduce the inherent financial costs. Therefore, this work focuses on the development of a diagnosis decision support system in terms of a formal agenda built on a Logic Programming approach to knowledge representation and reasoning, complemented with a case-based problem-solving methodology for computing, that caters for the handling of incomplete, unknown, or even contradictory information. The proposed model has been quite accurate in predicting the length of stay (overall accuracy of 84.9%), while reducing the computational time by around 21.3%.
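A toy example of the case-based part of such an approach (attribute names, weights and cases are hypothetical, and this is not the authors' system) might retrieve the most similar past cases while treating unknown values neutrally:

```python
# Hypothetical case-based reasoning sketch: predict length of stay from the
# most similar past cases, with unknown attribute values contributing nothing.

def similarity(new_case, past_case, weights):
    score, total = 0.0, 0.0
    for attr, w in weights.items():
        a, b = new_case.get(attr), past_case.get(attr)
        total += w
        if a is None or b is None:        # unknown value: neutral contribution
            continue
        score += w * (1.0 - abs(a - b))   # attributes normalized to [0, 1]
    return score / total

def predict_los(new_case, case_base, weights, k=2):
    ranked = sorted(case_base, key=lambda c: similarity(new_case, c, weights),
                    reverse=True)
    return sum(c["los_days"] for c in ranked[:k]) / k   # average of top-k cases

weights = {"gest_age": 0.5, "birth_weight": 0.3, "apgar5": 0.2}
case_base = [{"gest_age": 0.6, "birth_weight": 0.5, "apgar5": 0.9, "los_days": 12},
             {"gest_age": 0.9, "birth_weight": 0.8, "apgar5": 1.0, "los_days": 4},
             {"gest_age": 0.5, "birth_weight": 0.4, "apgar5": None, "los_days": 15}]

print(predict_los({"gest_age": 0.55, "birth_weight": 0.45, "apgar5": 0.8},
                  case_base, weights))
```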
Abstract:
Thrombophilia denotes a genetic or acquired tendency to hypercoagulable states that increases the risk of venous and arterial thromboses. Indeed, venous thromboembolism is often a chronic illness, mainly in deep venous thrombosis and pulmonary embolism, requiring lifelong prevention strategies. It is therefore crucial to identify the cause of the disease, the most appropriate treatment and the length of treatment, and to prevent a thrombotic recurrence. Thus, this work focuses on the development of a diagnosis decision support system in terms of a formal agenda built on a logic programming approach to knowledge representation and reasoning, complemented with a case-based approach to computing. The proposed model has been quite accurate in the assessment of thrombophilia predisposition risk, with an overall accuracy higher than 90% and a sensitivity in the interval [86.5%, 88.1%]. The main strength of the proposed solution is its ability to deal explicitly with incomplete, unknown, or even self-contradictory information.
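For reference, the quality figures quoted above are standard confusion-matrix metrics; the sketch below shows how accuracy and sensitivity are computed from hypothetical test counts (not the study's data):

```python
# Accuracy and sensitivity from a confusion matrix; counts are made up.
tp, fn, fp, tn = 88, 12, 6, 94           # hypothetical test-set outcomes

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)             # true positive rate
specificity = tn / (tn + fp)             # true negative rate

print(f"accuracy={accuracy:.1%} sensitivity={sensitivity:.1%} "
      f"specificity={specificity:.1%}")
```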
Abstract:
A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. The fact that the problem is on-line means that solutions must be computed in real time, and the time they require cannot exceed a certain threshold without affecting the normal functioning of the system. In addition, a job dispatcher must deal with a great deal of uncertainty: submission times, the number of requested resources, and the duration of jobs. Heuristic-based techniques have been broadly used in HPC systems, obtaining (sub-)optimal solutions in a short time. However, their scheduling and resource allocation components are separated, which produces decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, that in general employ many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to tackling job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to satisfy the challenges of on-line dispatching, such as generating dispatching decisions within a brief period and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are more suitable for HPC systems running modern applications, generating on-line dispatching decisions in an appropriate time and making effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
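As a toy illustration of CP-based dispatching (not the dispatchers proposed in this work), a cumulative scheduling model in Google OR-Tools CP-SAT could assign start times to jobs under a core-capacity constraint, with hypothetical job sizes and the sum of start times as a rough proxy for slowdown:

```python
# Toy constraint programming dispatching model using OR-Tools CP-SAT.
from ortools.sat.python import cp_model

jobs = [("j1", 3, 16), ("j2", 1, 8), ("j3", 2, 24)]   # (name, duration, cores)
total_cores = 32
horizon = sum(d for _, d, _ in jobs)

model = cp_model.CpModel()
starts, intervals, demands = [], [], []
for name, duration, cores in jobs:
    start = model.NewIntVar(0, horizon, f"start_{name}")
    end = model.NewIntVar(0, horizon, f"end_{name}")
    intervals.append(model.NewIntervalVar(start, duration, end, f"iv_{name}"))
    starts.append(start)
    demands.append(cores)

# Jobs running at the same time may not use more cores than the system has.
model.AddCumulative(intervals, demands, total_cores)
model.Minimize(sum(starts))            # crude proxy for reducing job slowdown

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for (name, _, _), start in zip(jobs, starts):
        print(name, "starts at", solver.Value(start))
```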