865 results for Filmic approach methods
Abstract:
Objectives: This study provides the first large-scale analysis of the ages at which adolescents in medieval England entered and completed the pubertal growth spurt. This new method has implications for expanding our knowledge of adolescent maturation across different time periods and regions. Methods: In total, 994 adolescent skeletons (10-25 years) from four urban sites in medieval England (AD 900-1550) were analysed for evidence of pubertal stage using new osteological techniques developed from the clinical literature (i.e. hamate hook development, cervical vertebral maturation (CVM), canine mineralisation, iliac crest ossification, radial fusion). Results: Adolescents began puberty at a similar age to modern children, at around 10-12 years, but the onset of menarche in girls was delayed by up to 3 years, occurring at around 15 years for most of the study sample and at 17 years for females living in London. Modern European males usually complete their maturation by 16-18 years; medieval males took longer, with the deceleration stage of the growth spurt extending as late as 21 years. Conclusions: This research provides the first attempt to directly assess the age of pubertal development in adolescents during the tenth to seventeenth centuries. Poor diet, infections, and physical exertion may have contributed to delayed development in the medieval adolescents, particularly those living in the city of London. This study sheds new light on the nature of adolescence in the medieval period, highlighting an extended period of physical and social transition.
Abstract:
The role and function of a given protein is dependent on its structure. In recent years, however, numerous studies have highlighted the importance of unstructured, or disordered, regions in governing a protein's function. Disordered proteins have been found to play important roles in pivotal cellular functions, such as DNA binding and signalling cascades. Studying proteins with extended disordered regions is often problematic, as they can be challenging to express, purify and crystallise. This means that interpretable experimental data on protein disorder is hard to generate. As a result, predictive computational tools have been developed with the aim of predicting the level and location of disorder within a protein. Currently, over 60 prediction servers exist, utilizing different methods for classifying disorder and different training sets. Here we review several well-performing, publicly available prediction methods, comparing their application and discussing how disorder prediction servers can be used to aid the experimental solution of protein structure. The use of disorder prediction methods allows us to adopt a more targeted approach to experimental studies by accurately identifying the boundaries of ordered protein domains so that they may be investigated separately, thereby increasing the likelihood of their successful experimental solution.
Abstract:
Of the many sources of urban greenhouse gas (GHG) emissions, solid waste is the only one for which management decisions are undertaken primarily by municipal governments themselves, and it is hence often the largest component of cities' corporate inventories. It is essential that decision-makers select an appropriate quantification methodology and have an appreciation of methodological strengths and shortcomings. This work compares four different waste emissions quantification methods: the Intergovernmental Panel on Climate Change (IPCC) 1996 guidelines, the IPCC 2006 guidelines, the U.S. Environmental Protection Agency (EPA) Waste Reduction Model (WARM), and the Federation of Canadian Municipalities-Partners for Climate Protection (FCM-PCP) quantification tool. Waste disposal data for the greater Toronto area (GTA) in 2005 are used for all methodologies; treatment options (including landfill, incineration, compost, and anaerobic digestion) are examined where available in the methodologies. Landfill was shown to be the greatest source of GHG emissions, contributing more than three-quarters of total emissions associated with waste management. Results from the different landfill gas (LFG) quantification approaches ranged from an emissions source of 557 kt carbon dioxide equivalents (CO2e) (FCM-PCP) to a carbon sink of −53 kt CO2e (EPA WARM). Similar values were obtained between the IPCC approaches. The IPCC 2006 method was found to be more appropriate for inventorying applications because it uses a waste-in-place (WIP) approach rather than a methane commitment (MC) approach, despite the perceived onerous data requirements of WIP. MC approaches were found to be useful from a planning standpoint; however, the uncertainty associated with their projections of future parameter values limits their applicability for GHG inventorying. MC and WIP methods provided similar results in this case study; however, this agreement is case specific, arising from similar assumptions about present and future landfill parameters and from quantities of annual waste deposited in recent years being relatively consistent.
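To make the WIP/MC distinction concrete, the sketch below contrasts the two accounting conventions using a generic first-order decay (FOD) model of the kind underlying the IPCC 2006 guidelines. The decay rate K, methane generation potential L0, and tonnages are illustrative assumptions, not the GTA inputs used in the study, and conversion of CH4 to CO2e is omitted.

```python
import math

K = 0.05    # first-order decay rate, 1/yr (assumed value)
L0 = 0.10   # methane generation potential, t CH4 per t waste (assumed value)

def methane_commitment(tonnes_deposited):
    """MC convention: all lifetime CH4 from this year's waste is booked now."""
    return L0 * tonnes_deposited

def waste_in_place(deposits, year):
    """WIP convention (first-order decay): CH4 actually generated in `year`
    by all waste already sitting in the landfill."""
    total = 0.0
    for dep_year, tonnes in deposits.items():
        age = year - dep_year
        if age >= 0:
            # CH4 released between ages `age` and `age + 1` under exponential decay
            total += L0 * tonnes * (math.exp(-K * age) - math.exp(-K * (age + 1)))
    return total

# With a long history of roughly constant annual deposits, the two conventions
# converge, mirroring the case-specific agreement noted in the abstract.
deposits = {y: 900_000 for y in range(1960, 2006)}  # t/yr, assumed constant
print(methane_commitment(deposits[2005]))  # CH4 committed by 2005's waste
print(waste_in_place(deposits, 2005))      # CH4 generated during 2005
```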
Abstract:
Background: 29 autoimmune diseases, including Rheumatoid Arthritis, gout, Crohn's Disease, and Systemic Lupus Erythematosus, affect 7.6-9.4% of the population. While effective therapy is available, many patients do not follow treatment or use medications as directed. Digital health and Web 2.0 interventions have demonstrated much promise in increasing medication and treatment adherence, but to date many Internet tools have proven disappointing. In fact, most digital interventions continue to suffer from high attrition in patient populations, are burdensome for healthcare professionals, and have relatively short life spans. Objective: Digital health tools have traditionally centered on the transformation of existing interventions (such as diaries, trackers, stage-based or cognitive behavioral therapy programs, coupons, or symptom checklists) to electronic format. Advanced digital interventions have also incorporated attributes of Web 2.0 such as social networking, text messaging, and the use of video. Despite these efforts, there has been little measurable impact on non-adherence for illnesses that require medical interventions, and research must look to other strategies or development methodologies. As a first step in investigating the feasibility of developing such a tool, the objective of the current study is to systematically rate factors of non-adherence that have been reported in past research studies. Methods: Grounded Theory, recognized as a rigorous method that facilitates the emergence of new themes through systematic analysis, data collection, and coding, was used to analyze quantitative, qualitative, and mixed-method studies addressing the following autoimmune diseases: Rheumatoid Arthritis, gout, Crohn's Disease, Systemic Lupus Erythematosus, and inflammatory bowel disease. Studies were only included if they contained primary data addressing the relationship with non-adherence. Results: Out of the 27 studies, four non-modifiable and 11 modifiable risk factors were discovered. Over one third of the articles identified the following risk factors as common contributors to medication non-adherence (percent of studies reporting): patients not understanding treatment (44%), side effects (41%), age (37%), dose regimen (33%), and perceived medication ineffectiveness (33%). An unanticipated finding that emerged was the need for risk stratification tools (81%) with patient-centric approaches (67%). Conclusions: This study systematically identifies and categorizes medication non-adherence risk factors in select autoimmune diseases. Findings indicate that patients' understanding of their disease and the role of medication are paramount. An unexpected finding was that the majority of research articles called for the creation of tailored, patient-centric interventions that dispel personal misconceptions about disease, pharmacotherapy, and how the body responds to treatment. To our knowledge, these interventions do not yet exist in digital format. Rather than adopting a systems-level approach, digital health programs should focus on cohorts with heterogeneous needs and develop tailored interventions based on individual non-adherence patterns.
Abstract:
Replacement, expansion, and upgrading of assets in the electricity network represent a financial investment for the distribution utilities. Network Investment Deferral (NID) is a well-discussed benefit of wider adoption of Distributed Generation (DG), and there have been many attempts to quantify and evaluate its financial benefit for the distribution utilities. While the carbon benefits of NID are commonly mentioned, there has been little attempt to quantify these impacts. This paper explores the quantitative methods previously used to evaluate financial benefits in order to discuss the carbon impacts. These carbon impacts are important for companies owning DG equipment, both for internal reporting and for emissions-reduction ambitions. Currently, a GB-wide approach is taken as a means for discussing more regional and local methods to be used in future work. By investigating these principles, the paper offers a novel approach to quantifying carbon emissions from various DG technologies.
Abstract:
While a growing number of small- and medium-sized enterprises (SMEs) are making use of coaching, little is known about the impact such coaching has within this sector. This study sought to identify the factors that influence managers' decision to engage with coaching, their perceptions of the coaching 'journey' and the kinds of benefits accruing from coaching: organisational, personal or both. As part of a mixed methods approach, a survey tool was developed based upon a range of relevant management competencies from the UK's Management Occupational Standards, and responses were analysed using importance-performance analysis, an approach first used in the marketing sector to evaluate customer satisfaction. Results indicate that coaching had a significant impact on personal attributes such as 'Managing Self-Cognition' and 'Managing Self-Emotional', whereas the impact on business-oriented attributes was weaker. Managers' choice of coaches with psychotherapeutic rather than non-psychotherapeutic backgrounds was also statistically significant. We conclude that even in the competitive business environment of SMEs, coaching was used as a largely personal, therapeutic intervention rather than to build business-oriented competencies.
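For readers unfamiliar with importance-performance analysis, the sketch below shows the core quadrant classification in its classic Martilla-and-James form, splitting attributes at the mean importance and mean performance ratings. Two attribute names echo the competencies mentioned above, but all ratings and the remaining attributes are invented for illustration.

```python
from statistics import mean

# attribute: (importance, performance), e.g. mean ratings on a 1-5 scale
ratings = {
    "Managing Self-Cognition": (4.6, 4.2),
    "Managing Self-Emotional": (4.4, 4.1),
    "Business planning":       (4.1, 2.9),  # hypothetical attribute
    "Marketing strategy":      (3.2, 3.0),  # hypothetical attribute
}

imp_cut = mean(i for i, _ in ratings.values())
perf_cut = mean(p for _, p in ratings.values())

for attr, (imp, perf) in sorted(ratings.items()):
    if imp >= imp_cut and perf < perf_cut:
        quadrant = "concentrate here"       # important but under-performing
    elif imp >= imp_cut:
        quadrant = "keep up the good work"  # important and performing well
    elif perf >= perf_cut:
        quadrant = "possible overkill"      # performing well, low importance
    else:
        quadrant = "low priority"
    print(f"{attr}: {quadrant}")
```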
Abstract:
Bloom filters are a data structure for storing data in compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice, the problem size is normally large, making the optimal solution intractable. Considering the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed that makes use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound (B&B) solver, and it is what can be recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
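A minimal sketch of the yes-no construction described above, assuming a plain salted-hash Bloom filter. The greedy selection step stands in for the paper's ILP/ADP optimization: it simply skips any false positive whose insertion would make the no-filter match a true member, preserving the defining "no true positives" constraint while possibly covering fewer false positives than the optimal selection.

```python
import hashlib

class BloomFilter:
    """A plain Bloom filter using k salted SHA-256 positions over m bits."""
    def __init__(self, m, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

class YesNoBloomFilter:
    def __init__(self, m_yes, m_no):
        self.yes, self.no = BloomFilter(m_yes), BloomFilter(m_no)

    def build(self, members, known_false_positives):
        for item in members:
            self.yes.add(item)
        # Greedy stand-in for the ILP/ADP selection: include a false positive
        # only if the no-filter still matches no true member afterwards.
        for fp in known_false_positives:
            snapshot = bytes(self.no.bits)
            self.no.add(fp)
            if any(m in self.no for m in members):
                self.no.bits = bytearray(snapshot)  # revert the insertion

    def __contains__(self, item):
        # Accept only items the yes-filter matches and the no-filter rejects.
        return item in self.yes and item not in self.no
```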
Abstract:
Purpose: This research explored the use of developmental evaluation methods with community of practice programmes experiencing change or transition, to better understand how to target support resources. Design/methodology/approach: The practical use of a number of developmental evaluation methods was explored in three organisations over a nine-month period using an action research design. The research was a collaborative process involving all the company participants and the academic (the author), with the intention of developing the practices of the participants as well as contributing to scholarship. Findings: The developmental evaluation activities achieved the objectives of the knowledge managers concerned: they developed a better understanding of the contribution and performance of their communities of practice, allowing support resources to be better targeted. Three methods (fundamental evaluative thinking, the actual-ideal comparative method, and a focus on strengths and assets) were found to be useful. Cross-case analysis led to the proposition that developmental evaluation methods act as a structural mechanism that develops the discourse of the organisation in ways that enhance the climate for learning, potentially helping to develop a learning organisation. Practical implications: Developmental evaluation methods add to the options available to evaluate community of practice programmes, supplementing the commonly used activity indicators and impact story methods. Originality/value: Developmental evaluation methods are often used in social change initiatives, informing public policy and funding decisions. The contribution here is to extend their use to organisational community of practice programmes.
Abstract:
Aim: To compare the remodeling of the alveolar process at implants installed immediately into extraction sockets by applying a flap or a "flapless" surgical approach in a dog model. Material and methods: Implants were installed immediately into the distal alveoli of the second mandibular premolars of six Labrador dogs. In one side of the mandible, a full-thickness mucoperiosteal flap was elevated (control site), while contra-laterally, the mucosa was gently dislocated, but not elevated (test site), to disclose the alveolar crest. After 4 months of healing, the animals were sacrificed, ground sections were obtained, and a histomorphometric analysis was performed. Results: After 4 months of healing, all implants were integrated (n=6). Both at the test and at the control sites, bone resorption occurred with similar outcomes. The buccal bony crest resorption was 1.7 and 1.5 mm at the control and the test sites, respectively. Conclusions: "Flapless" implant placement into extraction sockets did not result in the prevention of alveolar bone resorption and did not affect the dimensional changes of the alveolar process following tooth extraction when compared with the usual placement of implants raising mucoperiosteal flaps. To cite this article: Caneva M, Botticelli D, Salata LA, Souza SLS, Bressan E, Lang NP. Flap vs. "flapless" surgical approach at immediate implants: a histomorphometric study in dogs. Clin. Oral Impl. Res. 21, 2010; 1314-1319. doi: 10.1111/j.1600-0501.2009.01959.x.
Abstract:
Diverse invertebrate and vertebrate species live in association with plants of the large Neotropical family Bromeliaceae. Although previous studies have assumed that debris of associated organisms improves plant nutrition, so far little evidence supports this assumption. In this study we used isotopic (¹⁵N) and physiological methods to investigate whether the treefrog Scinax hayii, which uses the tank epiphytic bromeliad Vriesea bituminosa as a diurnal shelter, contributes to host plant nutrition. In the field, bromeliads with frogs had higher stable N isotopic composition (δ¹⁵N) values than those without frogs. Similar results were obtained from a controlled greenhouse experiment. Linear mixing models showed that frog feces and dead termites (used to simulate insects that eventually fall inside the bromeliad tank) contributed, respectively, 27.7% (±0.07 SE) and 49.6% (±0.50 SE) of the total N of V. bituminosa. Net photosynthetic rate was higher in plants that received feces and termites than in controls; however, this effect was only detected in the rainy season, not in the dry season. These results demonstrate for the first time that vertebrates contribute to bromeliad nutrition, and that this benefit is seasonally restricted. Since amphibian-bromeliad associations occur in diverse habitats in South and Central America, this mechanism for deriving nutrients may be important in bromeliad systems throughout the Neotropics.
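The linear mixing logic is straightforward in its two-source form; the sketch below solves for the fraction of plant nitrogen attributable to frog feces versus termites. All δ¹⁵N values here are invented for illustration, and the study's actual models also propagated standard errors and additional sources.

```python
# Two-source linear mixing model: d_plant = f * d_feces + (1 - f) * d_termite
d_plant, d_feces, d_termite = 4.0, 6.5, 1.5  # delta-15N in per mil (assumed)

# Rearranging for the feces fraction f:
f_feces = (d_plant - d_termite) / (d_feces - d_termite)
print(f"fraction of plant N from frog feces: {f_feces:.1%}")
print(f"fraction of plant N from termites:   {1 - f_feces:.1%}")
```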
Abstract:
The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks that were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments, and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision.
Abstract:
In this paper, we compare the performance of two statistical approaches for the analysis of data obtained from the social research area. In the first approach, we use normal models with joint regression modelling for the mean and for the variance heterogeneity. In the second approach, we use hierarchical models. In the first case, individual and social variables are included in the regression modelling for the mean and for the variance as explanatory variables, while in the second case, the variance at level 1 of the hierarchical model depends on the individuals (the age of the individuals), and at level 2 of the hierarchical model, the variance is assumed to change according to socioeconomic stratum. Applying these methodologies, we analyze a Colombian height data set to find differences that can be explained by socioeconomic conditions. We also present some theoretical and empirical results concerning the two models. From this comparative study, we conclude that it is better to jointly model the mean and the variance heterogeneity in all cases. We also observe that the convergence of the Gibbs sampling chain used in the Markov chain Monte Carlo method for jointly modelling the mean and variance heterogeneity is quickly achieved.
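As a point of reference, one standard specification of joint mean-and-variance regression of the kind compared above gives the log-variance its own linear predictor; the notation below is generic, not taken from the paper:

```latex
y_i \sim \mathcal{N}(\mu_i, \sigma_i^2), \qquad
\mu_i = x_i^{\top}\beta, \qquad
\log \sigma_i^2 = z_i^{\top}\gamma
```

Here x_i and z_i collect the individual and social explanatory variables for the mean and for the variance heterogeneity, respectively; in the hierarchical alternative, the level-1 variance depends on age while the level-2 variance changes with socioeconomic stratum.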
Abstract:
There is an increasing interest in the application of Evolutionary Algorithms (EAs) to induce classification rules. This hybrid approach can benefit areas where classical methods for rule induction have not been very successful. One example is the induction of classification rules in imbalanced domains. Imbalanced data occur when one or more classes heavily outnumber the other classes. Frequently, classical machine learning (ML) classifiers are not able to learn in the presence of imbalanced data sets, inducing classification models that always predict the most numerous classes. In this work, we propose a novel hybrid approach to deal with this problem. We create several balanced data sets, each with all minority-class cases and a random sample of majority-class cases. These balanced data sets are fed to classical ML systems, which produce rule sets. The rule sets are combined, creating a pool of rules, and an EA is used to build a classifier from this pool of rules. This hybrid approach has some advantages over undersampling, since it reduces the amount of discarded information, and some advantages over oversampling, since it avoids overfitting. The proposed approach was experimentally analysed, and the results show an improvement in classification performance, measured as the area under the receiver operating characteristic (ROC) curve.
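The rebalancing step described above is easy to make concrete. The sketch below builds several balanced training sets, each containing every minority-class case plus an equally sized random sample of majority-class cases; training the rule inducers and the EA stage are out of scope, and all names and the toy data are illustrative.

```python
import random

def balanced_training_sets(minority, majority, n_sets, seed=0):
    """Each set = all minority cases + a fresh random sample of majority cases."""
    rng = random.Random(seed)
    training_sets = []
    for _ in range(n_sets):
        sample = rng.sample(majority, k=len(minority))  # match minority size
        data = minority + sample
        rng.shuffle(data)
        training_sets.append(data)
    return training_sets

# Toy usage: each balanced set would be fed to a classical rule inducer; the
# resulting rule sets are pooled, and the EA assembles the final classifier.
minority = [("case", i, "pos") for i in range(20)]
majority = [("case", i, "neg") for i in range(200)]
sets_ = balanced_training_sets(minority, majority, n_sets=5)
```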
Abstract:
In this paper we provide a complete algebraic invariant of link-homotopy, that is, an algebraic invariant that distinguishes two links if and only if they are link-homotopic. The paper establishes a connection between the "peripheral structures" approach to link-homotopy taken by Milnor, Levine, and others, and the string link action approach taken by Habegger and Lin.
Abstract:
The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. However, the items are grouped into different classes, so the overall knapsack has to be divided into compartments, and each compartment is loaded with items from the same class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartment loads are lower- and upper-bounded. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper, we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new and more compact integer linear model is also presented, which can be solved by a commercial branch-and-bound solver that found most of the optimal solutions for the constrained compartmentalized knapsack problem. On the other hand, the heuristics provide good solutions with low computational effort.
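For orientation, a compact and deliberately simplified statement of the problem consistent with the description above might read as follows, with item classes C_k, item values v_i and weights w_i, compartment-opening binaries y_k, fixed cost c_k and capacity loss s_k per compartment, and compartment load bounds l_k, u_k. This notation is an assumption for illustration, not the paper's formulation:

```latex
\max_{x, y} \; \sum_{k=1}^{K} \Big( \sum_{i \in C_k} v_i x_i - c_k y_k \Big)
\quad \text{s.t.} \quad
\sum_{k=1}^{K} \Big( \sum_{i \in C_k} w_i x_i + s_k y_k \Big) \le W,
\qquad
l_k y_k \le \sum_{i \in C_k} w_i x_i \le u_k y_k \quad (k = 1, \dots, K),
\qquad
x_i \in \{0, 1\}, \; y_k \in \{0, 1\}
```

The upper-bound constraint also forces every x_i in class C_k to zero when compartment k is not built (y_k = 0), which is what ties the fixed cost and capacity loss to the items actually loaded.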