910 results for Context analysis
Abstract:
Small businesses are considered important engines for job growth and economic development by policy makers worldwide. One of the most commonly cited constraints of small businesses is a lack of access to capital. To address this constraint, small business loan guarantee programs have been established in over 100 countries. There are a variety of types of guarantee funds, with the most significant differences being which borrowers are eligible for guarantees, and how borrowers are approved for guarantees. There is currently no clear delineation between types of programs and the economic conditions they operate in, though some trends are becoming apparent. However, these trends may not be leading to the best economic outcomes possible. By better matching the structure of the guarantee fund to the economic conditions it operates in, the program’s success in meeting economic development goals may be greatly improved. Many programs in developing countries may not be taking advantage of bank expertise and may be limiting the scope of their effectiveness. At the same time, programs in developed countries may be wasting resources by scattering their efforts too thinly and subsidizing less competitive firms to the detriment of local economic development.
Abstract:
Sparse traffic grooming is a practical problem to be addressed in heterogeneous multi-vendor optical WDM networks where only some of the optical cross-connects (OXCs) have grooming capabilities. Such a network is called a sparse grooming network. The sparse grooming problem under dynamic traffic in optical WDM mesh networks is a relatively unexplored problem. In this work, we propose the maximize-lightpath-sharing multi-hop (MLS-MH) grooming algorithm to support dynamic traffic grooming in sparse grooming networks. We also present an analytical model to evaluate the blocking performance of the MLS-MH algorithm. Simulation results show that MLS-MH outperforms an existing grooming algorithm, the shortest path single-hop (SPSH) algorithm. The numerical results from the analytical model closely match the simulation. The effect of the number of grooming nodes in the network on the blocking performance is also analyzed.
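The core idea of multi-hop grooming with lightpath sharing can be illustrated with a routing sketch: when a new sub-wavelength request arrives, links that already carry a lightpath with spare capacity are made cheap, so the route is biased toward reuse. The topology, cost weights, and function names below are illustrative assumptions, not the authors' MLS-MH implementation:

```python
import heapq

# Toy WDM topology: each neighbor entry carries the spare capacity of an
# existing lightpath on that link (0 means a new lightpath must be set up).
# Reusing an existing lightpath is charged far less than provisioning a new
# one, which biases routing toward sharing -- the grooming intuition.
NEW_LIGHTPATH_COST = 1.0
REUSE_COST = 0.1  # assumed weighting for illustration only

def groom_route(adj, src, dst):
    """Least-cost path where links with spare lightpath capacity are cheap."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, spare in adj.get(u, []):
            cost = REUSE_COST if spare > 0 else NEW_LIGHTPATH_COST
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# A-B and B-D have spare capacity on existing lightpaths; A-C and C-D do not,
# so the groomed route prefers the shareable two-hop path A-B-D.
adj = {
    "A": [("B", 3), ("C", 0)],
    "B": [("D", 2)],
    "C": [("D", 0)],
}
path, cost = groom_route(adj, "A", "D")
```

In a real grooming node the cost function would also reflect wavelength continuity and transceiver availability; this sketch only captures the sharing preference.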
Abstract:
Artificial selection for starvation resistance provided insight into the relationships between evolved physiological and life history trait responses following exposure to biologically induced stress. Alterations to body composition, metabolic rate, movement, and life history traits including development time, female egg production, and longevity in response to brief periods of starvation were investigated in genetically based starvation-resistant and control lines of Drosophila melanogaster. Analysis of the starvation-resistant lines indicated increased energy storage, with increased triglyceride deposition and conversion of carbohydrates to lipid, as identified by respiratory quotient values. Correlations between reductions in metabolic rates and movement in the starvation-resistant lines suggested the presence of an evolved physiological response resulting in energy conservation. Investigations of life history traits in the starvation-resistant lines indicated no significant differences in development time or reproduction between the selected and control lines. Measurements of longevity, however, indicated a significant reduction in starvation-resistant D. melanogaster lifespan. These results suggested that elevated lipid concentrations, similar to those observed in obesity, were correlated with premature mortality. Exposure of the starvation-resistant and control lines to diets supplemented with glucose, palmitic acid, and a 2:1 mixture of casein to albumin was used to investigate alterations in body composition, movement, and life history traits. Results obtained from this study indicated that increased sugar in the diet led to increased carbohydrate, glycogen, total sugar, trehalose, and triglyceride concentrations, while increased fat and protein in the diet resulted in increased soluble protein, carbohydrate, glycogen, total sugar, and trehalose concentrations.
Examination of life history trait responses indicated reduced fecundity in females exposed to increased glucose concentrations. Increased supplementation of palmitic acid was consistently correlated with an overall reduction in lifespan in both the starvation-resistant and control Drosophila lines, while measurements of movement indicated increased female activity levels in flies exposed to diets supplemented with fat and protein. Analyses of the physiological and life history trait responses to starvation and dietary supplementation in Drosophila melanogaster used in the present study have implications for investigating the mechanisms underlying the development and persistence of human obesity and associated metabolic disorders.
Abstract:
Most authors struggle to pick a title that adequately conveys all of the material covered in a book. When I first saw Applied Spatial Data Analysis with R, I expected a review of spatial statistical models and their applications in packages (libraries) from the CRAN site of R. The authors’ title is not misleading, but I was very pleasantly surprised by how deep the word “applied” is here. The first half of the book essentially covers how R handles spatial data. To some statisticians this may be boring. Do you want, or need, to know the difference between S3 and S4 classes, how spatial objects in R are organized, and how various methods work on the spatial objects? A few years ago I would have said “no,” especially to the “want” part. Just let me slap my Excel spreadsheet into R and run some spatial functions on it. Unfortunately, the world is not so simple, and ultimately we want to minimize effort to get all of our spatial analyses accomplished. The first half of this book certainly convinced me that some extra effort in organizing my data into certain spatial class structures makes the analysis easier and less subject to mistakes. I also admit that I found it very interesting and I learned a lot.
Abstract:
Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.
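The data-model / process-model / parameter-model layering the authors advocate can be made concrete with the simplest conjugate case. In the sketch below (all numbers are illustrative assumptions, not from the article), observations y_ij ~ Normal(theta_j, sigma^2) sit over a process layer theta_j ~ Normal(mu, tau^2), and the posterior mean of each theta_j is a precision-weighted compromise between the site's own data and the overall mean:

```python
# Minimal normal-normal hierarchy:
#   data model:    y_ij ~ Normal(theta_j, sigma^2)   (measurement error)
#   process model: theta_j ~ Normal(mu, tau^2)       (process variation)
# The posterior mean of theta_j "shrinks" the site mean toward mu, with the
# amount of shrinkage governed by how informative the site's data are.

def shrinkage_mean(site_obs, mu, sigma2, tau2):
    n = len(site_obs)
    ybar = sum(site_obs) / n
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # weight on the site's data
    return w * ybar + (1.0 - w) * mu

# A site with one noisy observation is pulled strongly toward mu = 0,
# while a well-sampled site keeps an estimate near its own mean of 4.
sparse = shrinkage_mean([4.0], mu=0.0, sigma2=4.0, tau2=1.0)
dense = shrinkage_mean([4.0] * 100, mu=0.0, sigma2=4.0, tau2=1.0)
```

This partial pooling is exactly why hierarchical models handle multiple uncertainty sources gracefully: poorly informed parameters borrow strength from the rest of the hierarchy.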
Abstract:
We propose a general framework for the analysis of animal telemetry data through the use of weighted distributions. It is shown that several interpretations of resource selection functions arise when constructed from the ratio of a use distribution and an availability distribution. Through the proposed general framework, several popular resource selection models are shown to be special cases of the general model by making assumptions about animal movement and behavior. The weighted distribution framework is shown to be easily extended to account for telemetry data that are highly auto-correlated, as is typical with the use of new technologies such as global positioning system (GPS) animal relocations. An analysis of simulated data using several models constructed within the proposed framework is also presented to illustrate the possible gains from the flexible modeling framework. The proposed model is applied to a brown bear data set from southeast Alaska.
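The weighted-distribution construction can be sketched on a discrete habitat grid: the use distribution is the availability distribution re-weighted by a selection function and renormalized, f_u(x) = w(x) f_a(x) / Σ_x w(x) f_a(x). The exponential form w(x) = exp(beta * x) and the grid values below are assumptions for illustration, not the paper's models:

```python
import math

# Discretized weighted distribution: availability f_a over a covariate grid
# is re-weighted by an assumed exponential selection function w(x) and
# renormalized to give the use distribution f_u.

def use_distribution(xs, availability, beta):
    weights = [math.exp(beta * x) * fa for x, fa in zip(xs, availability)]
    total = sum(weights)
    return [w / total for w in weights]

xs = [0.0, 1.0, 2.0, 3.0]              # habitat covariate values
availability = [0.25, 0.25, 0.25, 0.25]  # uniform availability
fu = use_distribution(xs, availability, beta=1.0)
```

With beta > 0 the use distribution shifts probability mass toward high covariate values relative to availability, which is the selection signal an RSF analysis estimates from relocation data.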
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat-modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. 
In step with theoretical developments, state-of-the-art software that implements these methods is described, making them accessible to practicing ecologists.
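The conventional distance sampling step described in point 4 can be sketched numerically. Assuming a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)) with g(0) = 1, the line-transect density estimator is D = n / (2 w L Pa), where Pa is the mean detection probability within truncation distance w. The survey numbers and sigma below are illustrative assumptions (Distance fits the detection function formally rather than taking sigma as given):

```python
import math

# Half-normal detection: g(x) = exp(-x^2 / (2 sigma^2)).
# Pa = (1/w) * integral_0^w g(x) dx, computed by midpoint integration.

def mean_detection_prob(sigma, w, steps=10_000):
    dx = w / steps
    area = sum(math.exp(-((i + 0.5) * dx) ** 2 / (2 * sigma**2)) * dx
               for i in range(steps))
    return area / w

def density_estimate(n, L, w, sigma):
    """Line-transect estimator D = n / (2 * w * L * Pa)."""
    pa = mean_detection_prob(sigma, w)
    return n / (2.0 * w * L * pa)

# Assumed survey: 120 detections on L = 40 km of transect, truncated at
# w = 0.1 km, with an assumed detection scale sigma = 0.05 km.
D = density_estimate(n=120, L=40.0, w=0.1, sigma=0.05)
```

The factor 2wL is the area searched on both sides of the transect; dividing by Pa corrects the count for animals present but missed at larger distances.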
Abstract:
Static analysis tools report software defects that may or may not be detected by other verification methods. Two challenges complicating the adoption of these tools are spurious false positive warnings and legitimate warnings that are not acted on. This paper reports automated support to help address these challenges using logistic regression models that predict the foregoing types of warnings from signals in the warnings and implicated code. Because examining many potential signaling factors in large software development settings can be expensive, we use a screening methodology to quickly discard factors with low predictive power and cost-effectively build predictive models. Our empirical evaluation indicates that these models can achieve high accuracy in predicting accurate and actionable static analysis warnings, and suggests that the models are competitive with alternative models built without screening.
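The two-step idea of screening candidate factors cheaply before fitting a predictive model can be sketched as follows. The screening statistic (absolute difference of class means), the synthetic warning data, and all names below are illustrative assumptions, not the paper's methodology:

```python
import math, random

def screen(X, y, keep):
    """Rank features by |difference of class means|; keep the top `keep`."""
    scores = []
    for j in range(len(X[0])):
        pos = [row[j] for row, label in zip(X, y) if label == 1]
        neg = [row[j] for row, label in zip(X, y) if label == 0]
        scores.append((abs(sum(pos) / len(pos) - sum(neg) / len(neg)), j))
    return sorted(j for _, j in sorted(scores, reverse=True)[:keep])

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Plain per-sample gradient-descent logistic regression (bias + weights)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for row, label in zip(X, y):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], row))
            p = 1.0 / (1.0 + math.exp(-z))
            err = label - p
            w[0] += lr * err
            for j, xi in enumerate(row):
                w[j + 1] += lr * err * xi
    return w

# Synthetic warnings: feature 0 separates actionable (1) from spurious (0)
# warnings; feature 1 is pure noise. Screening should retain feature 0 only.
random.seed(0)
y = [1, 0] * 50
X = [[random.gauss(1.0 if lbl else -1.0, 0.5), random.gauss(0, 1)]
     for lbl in y]
kept = screen(X, y, keep=1)
Xk = [[row[j] for j in kept] for row in X]
w = fit_logistic(Xk, y)
```

The point of screening is economic: the cheap univariate pass discards uninformative factors before the (comparatively expensive) model fitting and factor collection, mirroring the cost argument in the abstract.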
Abstract:
This chapter explores the characteristics of 114 American teenagers' Jewish identities using data from the National Study of Youth and Religion (NSYR). The NSYR includes a telephone survey of a nationally representative sample of 3,290 adolescents aged 13 to 17. Jewish teenagers were over-sampled, resulting in a total of 3,370 teenage participants. Of the NSYR teens surveyed, 141 have at least one Jewish parent and 114 of them identify as Jewish. The NSYR also includes in-depth face-to-face interviews with a total of 267 U.S. teens: 23 who have at least one Jewish parent and 18 who identify as Jewish. The following analysis draws upon quantitative data from the 114 teens who identified themselves as Jewish in the telephone survey.
Abstract:
Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk, focusing on the spreadsheet application paradigm, I present several of our approaches, focusing on methodologies that utilize source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered in the use of our methodologies that highlight several costs and benefits trade-offs, and many opportunities for future work.
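The dataflow analyses mentioned in the talk (dependence tracking and slicing over spreadsheet formulas) can be illustrated with a toy example. The miniature formula language, regex, and sheet below are assumptions for illustration, not the speaker's system:

```python
import re

# Build a dataflow view of a spreadsheet and compute a backward slice:
# every cell that can (transitively) influence a target cell's value.
# This is the kind of information used for validation and fault
# localization, since a fault observed in the target must originate
# somewhere in its slice.

CELL_REF = re.compile(r"[A-Z]+[0-9]+")

def backward_slice(sheet, target):
    """All cells the target transitively depends on."""
    seen = set()
    stack = [target]
    while stack:
        cell = stack.pop()
        for ref in CELL_REF.findall(sheet.get(cell, "")):
            if ref not in seen:
                seen.add(ref)
                stack.append(ref)
    return seen

sheet = {
    "A1": "10",
    "A2": "20",
    "B1": "=A1+A2",
    "B2": "=B1*2",
    "C1": "=7",   # unrelated to B2, so it stays out of the slice
}
deps = backward_slice(sheet, "B2")
```

An end-user interface would surface this as cell highlighting rather than a set of names, which is how such analyses can be offered without exposing the underlying theory.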
Abstract:
Masticatory muscle contraction causes both jaw movement and tissue deformation during function. Natural chewing data from 25 adult miniature pigs were studied by means of time series analysis. The data set included simultaneous recordings of electromyography (EMG) from bilateral masseter (MA), zygomaticomandibularis (ZM) and lateral pterygoid muscles, bone surface strains from the left squamosal bone (SQ), condylar neck (CD) and mandibular corpus (MD), and linear deformation of the capsule of the jaw joint measured bilaterally using differential variable reluctance transducers. Pairwise comparisons were examined by calculating the cross-correlation functions. Jaw-adductor muscle activity of MA and ZM was found to be highly cross-correlated with CD and SQ strains and weakly with MD strain. No muscle’s activity was strongly linked to capsular deformation of the jaw joint, nor were bone strains and capsular deformation tightly linked. Homologous muscle pairs showed the greatest synchronization of signals, but the signals themselves were not significantly more correlated than those of non-homologous muscle pairs. These results suggested that bone strains and capsular deformation are driven by different mechanical regimes. Muscle contraction and ensuing reaction forces are probably responsible for bone strains, whereas capsular deformation is more likely a product of movement.
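The pairwise cross-correlation analysis used in the study can be sketched with synthetic signals (the data below are stand-ins, not the EMG or strain recordings): compute a normalized correlation over a range of lags and report the lag at which two signals align best.

```python
import math

def xcorr_at_lag(a, b, lag):
    """Pearson correlation of a[t] with b[t + lag] over their overlap."""
    if lag >= 0:
        x, y = a[:len(a) - lag], b[lag:]
    else:
        x, y = a[-lag:], b[:len(b) + lag]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = math.sqrt(sum((xi - mx) ** 2 for xi in x) *
                    sum((yi - my) ** 2 for yi in y))
    return num / den

def best_lag(a, b, max_lag):
    """Lag (in samples) that maximizes the cross-correlation."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda lag: xcorr_at_lag(a, b, lag))

# b is a copy of a delayed by 3 samples, so the peak should sit at lag 3 --
# the same logic that links, e.g., adductor EMG to a later strain response.
a = [math.sin(0.3 * t) for t in range(100)]
b = [math.sin(0.3 * (t - 3)) for t in range(100)]
lag = best_lag(a, b, max_lag=10)
```

The height of the peak measures how tightly two signals are coupled, and its lag measures the timing offset; both quantities drive the study's comparisons between muscle activity, bone strain, and capsular deformation.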
Abstract:
Factor analysis was used to develop a more detailed description of the human hand to be used in the creation of glove sizes; currently, glove sizes are limited to small, medium, and large. The created glove sizes provide glove designers with the ability to create a glove design that can fit the majority of hand variations in both the male and female populations. The research used the American National Survey (ANSUR) data collected in 1988. These data contain eighty-six length, width, height, and circumference measurements of the human hand for one thousand male subjects and thirteen hundred female subjects. Eliminating redundant measurements reduced the data to forty-six essential measurements. Factor analysis grouped the variables to form three factors. The factors were used to generate hand sizes by taking percentiles along each factor axis. Two different sizing systems were created. The first system contains 125 sizes for males and females. The second system contains 7 sizes for males and 14 sizes for females. The sizing systems were compared to another hand sizing system created using the ANSUR database; the comparison indicated that the systems created using factor analysis provide a better fit.
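Cutting percentiles along three factor axes yields a size grid, which is how a 125-size system can arise (5 x 5 x 5 quintile cells). The sketch below uses random factor scores as stand-ins for real hand-measurement factors; the bin counts and names are assumptions for illustration:

```python
import random

def percentile_cuts(values, n_bins):
    """Cut points that split `values` into n_bins equal-count bins."""
    ordered = sorted(values)
    return [ordered[int(len(ordered) * k / n_bins)] for k in range(1, n_bins)]

def size_index(score, cuts):
    """Bin number (0 .. n_bins-1) for one factor score."""
    return sum(score >= c for c in cuts)

def assign_sizes(factor_scores, n_bins=5):
    """Map each subject's 3 factor scores to one of n_bins**3 grid sizes."""
    cuts = [percentile_cuts([s[axis] for s in factor_scores], n_bins)
            for axis in range(3)]
    return [tuple(size_index(s[axis], cuts[axis]) for axis in range(3))
            for s in factor_scores]

# Synthetic population of 1000 subjects with 3 factor scores each.
random.seed(1)
scores = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
          for _ in range(1000)]
sizes = assign_sizes(scores)
distinct = len(set(sizes))
```

Equal-count (percentile) cuts guarantee each bin covers a comparable share of the population along its axis, which is what lets a modest number of sizes fit the majority of hand variations.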
Abstract:
In west-central Texas, USA, abatement efforts for the gray fox (Urocyon cinereoargenteus) rabies epizootic illustrate the difficulties inherent in large-scale management of wildlife disease. The rabies epizootic has been managed through a cooperative oral rabies vaccination program (ORV) since 1996. Millions of edible baits containing a rabies vaccine have been distributed annually in a 16-km to 24-km zone around the perimeter of the epizootic, which encompasses a geographic area >4 × 10^5 km^2. The ORV program successfully halted expansion of the epizootic into metropolitan areas but has not achieved the ultimate goal of eradication. Rabies activity in gray fox continues to occur periodically outside the ORV zone, preventing ORV zone contraction and dissipation of the epizootic. We employed a landscape-genetic approach to assess gray fox population structure and dispersal in the affected area, with the aim of assisting rabies management efforts. No unique genetic clusters or population boundaries were detected. Instead, foxes were weakly structured over the entire region in an isolation by distance pattern. Local subpopulations appeared to be genetically non-independent over distances >30 km, implying that long-distance movements or dispersal may have been common in the region. We concluded that gray foxes in west-central Texas have a high potential for long-distance rabies virus trafficking. Thus, a 16-km to 24-km ORV zone may be too narrow to contain the fox rabies epizootic. Continued expansion of the ORV zone, although costly, may be critical to the long-term goal of eliminating the Texas fox rabies virus variant from the United States.
Abstract:
Stabilizing human population size and reducing human-caused impacts on the environment are keys to conserving threatened species (TS). Earth's human population is ~7 billion and increasing by ~76 million per year. This equates to a human birth-death ratio of 2.35 annually. The 2007 Red List prepared by the International Union for Conservation of Nature and Natural Resources (IUCN) categorized 16,306 species of vertebrates, invertebrates, plants, and other organisms (e.g., lichens, algae) as TS. This is ~1 percent of the 1,589,161 species described by IUCN or ~0.33 percent of the believed 5,000,000 total species. Of the IUCN's described species, vertebrates comprised the largest proportion of TS listings within their taxonomic category (5,742 of 59,811), while invertebrates (2,108 of 1,203,175), plants (8,447 of 297,326), and other species (9 of 28,849) accounted for much smaller percentages. Conservation economics comprises microeconomic and macroeconomic principles involving interactions among ecological, environmental, and natural resource economics. A sustainable-growth (steady-state) economy has been posited as instrumental to preserving biological diversity and slowing extinctions in the wild, but few nations endorse this approach. Expanding-growth principles characterize most nations' economic policies. To date, statutory fine, captive breeding cost, contingent valuation analysis, hedonic pricing, and travel cost methods are used to value TS in economic research and models. Improved valuation methods for TS are needed for benefit-cost analysis (BCA) of conservation plans. This chapter provides a review and analysis of: (1) the IUCN status of species, (2) economic principles inherent to sustainable versus growth economies, and (3) methodological issues which hinder effective BCAs of TS conservation.
Abstract:
Four of the 12 major Glycine max ancestors of all modern elite U.S.A. soybean cultivars were the grandparents of Harosoy and Clark, so a Harosoy x Clark population would include some of that genetic diversity. A mating of eight Harosoy and eight Clark plants generated eight F1 plants. The eight F1:2 families were advanced via a plant-to-row selfing method to produce 300 F6-derived RILs that were genotyped with 266 SSR, 481 SNP, and 4 classical markers. SNPs were genotyped with the Illumina 1536-SNP assay. Three linkage maps, SSR, SNP, and SSR-SNP, were constructed with a genotyping error of < 1 %. Each map was compared with the published soybean consensus map. The best subset of 94 RILs for a high-resolution framework (joint) map was selected based on the expected bin length statistic computed with MapPop. The QTLs of seven traits measured in a 2-year replicated performance trial of the 300 RILs were identified using composite interval mapping (CIM) and multiple-interval mapping (MIM). QTL x Year effects in multiple trait analysis were compared with results of multiple-interval mapping. QTL x QTL effects were identified in MIM.