935 results for accessibility analysis tools


Relevance: 30.00%

Abstract:

Since the 1990s, scholars have paid special attention to public management's role in theory and research under the assumption that effective management is one of the primary means for achieving superior performance. To some extent, this was influenced by popular business writings of the 1980s as well as the reinventing-government literature of the 1990s. A number of case studies, but only limited quantitative research, have shown that management matters in the performance of public organizations. My study examined whether management capacity increased organizational performance, using quantitative techniques. The specific research problem analyzed was whether significant differences existed between high- and average-performing public housing agencies on select criteria identified in the Government Performance Project (GPP) management capacity model, and whether this model could predict outcome performance measures in a statistically significant manner while controlling for exogenous influences. My model included two of the four GPP management subsystems (human resources and information technology), integration and alignment of subsystems, and an overall managing-for-results framework. It also included environmental and client control variables that were hypothesized to affect performance independent of management action. Descriptive results of survey responses showed high-performing agencies with better scores on most high-performance dimensions of individual criteria, suggesting support for the model; however, quantitative analysis found limited statistically significant differences between high and average performers and limited predictive power of the model. My analysis led to the following major conclusions: past performance was the strongest predictor of present performance; high unionization hurt performance; and budget-related criteria mattered more for high performance than other model factors. As to the specific research question, management capacity may be necessary, but it is not sufficient to increase performance. The research suggested managers may benefit by implementing best practices identified through the GPP model. The usefulness of the model could be improved by adding direct service delivery, which may also improve its predictive power. Finally, abundant tested concepts and tools are available to practitioners for improving management subsystem support of direct service delivery.
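
As a rough illustration of the kind of model the study describes (predicting present performance from management-capacity scores while controlling for past performance and unionization), the sketch below fits an ordinary least squares regression on synthetic data; every variable name, coefficient and value here is hypothetical, not the study's.

```python
# Illustrative sketch (not the study's actual model): regress a present
# performance score on a management-capacity score while controlling for
# past performance and unionization, as the abstract describes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # hypothetical number of housing agencies

past_perf = rng.normal(70, 10, n)        # strongest predictor per the findings
mgmt_capacity = rng.normal(50, 8, n)     # e.g., HR/IT subsystem scores (invented)
unionization = rng.uniform(0, 1, n)      # share of unionized staff (invented)
perf = 0.8 * past_perf + 0.1 * mgmt_capacity - 5 * unionization + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([past_perf, mgmt_capacity, unionization]))
model = sm.OLS(perf, X).fit()
print(model.summary())  # inspect coefficients and their significance
```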

Relevance: 30.00%

Abstract:

Stable isotope analysis has emerged as one of the primary means for examining the structure and dynamics of food webs, and numerous analytical approaches are now commonly used in the field. Techniques range from simple, qualitative inferences based on the isotopic niche, to Bayesian mixing models that can be used to characterize food-web structure at multiple hierarchical levels. We provide a comprehensive review of these techniques, and thus a single reference source to help identify the most useful approaches to apply to a given data set. We structure the review around four general questions: (1) what is the trophic position of an organism in a food web?; (2) which resource pools support consumers?; (3) what additional information does relative position of consumers in isotopic space reveal about food-web structure?; and (4) what is the degree of trophic variability at the intrapopulation level? For each general question, we detail different approaches that have been applied, discussing the strengths and weaknesses of each. We conclude with a set of suggestions that transcend individual analytical approaches, and provide guidance for future applications in the field.
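
For the first question, trophic position, one widely used isotope-based estimator (after Post 2002) is shown below as a representative example; the review surveys several variants, so treat the baseline choice and the enrichment value as assumptions rather than the review's single prescription.

```latex
% Common trophic-position estimator (after Post 2002); one of several
% formulations discussed in this literature.
\[
  TP = \lambda + \frac{\delta^{15}N_{\mathrm{consumer}} - \delta^{15}N_{\mathrm{base}}}{\Delta^{15}N}
\]
% \lambda: trophic position of the baseline organism (e.g., 2 for a primary consumer)
% \Delta^{15}N: per-trophic-level enrichment, often taken as roughly 3.4 per mil
```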

Relevance: 30.00%

Abstract:

A difficult transition to a new paradigm of Democratic Security and the subsequent process of military restructuring during the nineties led El Salvador, Honduras, Guatemala and Nicaragua to reconsider the structures and functions of their armed forces and police agencies. This study compares the institutions of these four countries to assess their current condition and response capacity in view of the contemporary security challenges in Central America. The report reveals that the original intention of limiting armies to defending and protecting borders has been threatened by the armies' increasing participation in public security. While the strength of the armies has been consolidated in terms of numbers, air and naval forces have not been strengthened or sufficiently developed to combat organized crime and drug trafficking effectively, and are barely able to conduct air and sea operations. Honduras is the only country that has maintained a proportional distribution of its armed forces. However, security has been in the hands of a Judicial Police, supervised by the Public Ministry; the Honduran Judicial Police has been limited to preventive police duties and is prohibited from carrying out criminal investigations. Nicaragua, meanwhile, possesses a successful police force, socially recognized for maintaining satisfactory levels of security that surpass those of the Guatemalan and Salvadoran police, which have not achieved similar results despite having set up civilian police forces separate from the military. El Salvador, for its part, has excelled in promoting a police academy and professional career education, even while not having military attachés in other countries. Regarding budgetary issues, the four countries allocate almost twice as much funding to their security budgets as to their defense budgets. However, spending in both areas is low when taking into account each country's GDP as well as their high crime rates. Regional security challenges must be accompanied by a professionalization of the regional armies focused on protecting and defending borders, and strong institutional frameworks are required to support the fight against crime and drug trafficking. This will require strengthening customs, greater control of illicit arms trafficking, investment in education initiatives, creating employment opportunities, and significant improvements in the judicial system as well as its accessibility to the average citizen.

Relevance: 30.00%

Abstract:

In the article "Menu Analysis: Review and Evaluation" by Lendal H. Kotschevar, Distinguished Professor, School of Hospitality Management, Florida International University, Kotschevar's initial statement reads: "Various methods are used to evaluate menus. Some have quite different approaches and give different information. Even those using quite similar methods vary in the information they give. The author attempts to describe the most frequently used methods and to indicate their value. A correlation calculation is made to see how well certain of these methods agree in the information they give." There is more than one way to look at the word menu. One is the set of culinary selections decided upon by the head chef or owner of a restaurant, which ultimately defines the type of restaurant; another is the physical document that a patron actually holds in his or her hand. These are the most common senses of the word. The author concentrates primarily on the latter, and uses counts of the number of each item sold to measure the popularity of any particular item. This, along with a formula, allows Kotschevar to arrive at a specific value per item. Menu analysis would appear a difficult subject to broach: how does one qualify and quantify a menu when the exercise seems so subjective? The author offers methods and outlines for approaching menu analysis from empirical perspectives. "Menus are often examined visually through the evaluation of various factors. It is a subjective method but has the advantage of allowing scrutiny of a wide range of factors which other methods do not," says Kotschevar. "The method is also highly flexible. Factors can be given a score value and scores summed to give a total for a menu. This allows comparison between menus. If the one making the evaluations knows menu values, it is a good method of judgment," he further offers. Assigning values is fundamental to a pragmatic menu analysis; it is how the reviewer keeps score, so to speak. Value merit provides reliable criteria from which to gauge a particular menu item, and menu evaluation ultimately provides the mechanism for either keeping or rejecting selected items on a menu. Kotschevar presents at least three matrix evaluation methods: the Miller method, the Smith and Kasavana method, and the Pavesic method. He offers illustrated examples of each in table format; these are helpful tools, since trying to explain the theories behind the tables in prose alone would be difficult at best. Kotschevar also references analysis methods that are not matrix based, such as Hayes and Huffman's Goal Value Analysis. The author sees no one method as better than another, and suggests that combining two or more of the methods is beneficial.
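
To make the matrix idea concrete, here is a minimal sketch of a Smith and Kasavana style menu-engineering classification on invented menu data; the thresholds follow the commonly cited convention (70% of an equal menu-mix share for popularity, sales-weighted average contribution margin for profitability), which may differ in detail from the article's own presentation.

```python
# Sketch of the Smith-Kasavana "menu engineering" matrix: items are
# classified by popularity (menu-mix share) and contribution margin.
# All menu data below is invented for illustration.

menu = {  # item: (units sold, selling price, food cost)
    "Steak":  (180, 28.00, 11.00),
    "Salmon": (140, 24.00, 10.00),
    "Pasta":  (320, 16.00, 4.00),
    "Burger": (60, 14.00, 5.00),
}

total_sold = sum(units for units, _, _ in menu.values())
cm = {name: price - cost for name, (units, price, cost) in menu.items()}
avg_cm = sum(cm[n] * menu[n][0] for n in menu) / total_sold  # sales-weighted
pop_threshold = (1 / len(menu)) * 0.70  # conventional popularity cutoff

for name, (units, price, cost) in menu.items():
    popular = units / total_sold >= pop_threshold
    profitable = cm[name] >= avg_cm
    label = {(True, True): "Star", (True, False): "Plowhorse",
             (False, True): "Puzzle", (False, False): "Dog"}[(popular, profitable)]
    print(f"{name}: {label}")
```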

Relevance: 30.00%

Abstract:

In this study we identified key genes that are critical in the development of astrocytic tumors. A meta-analysis of microarray studies comparing normal tissue to astrocytoma revealed a set of 646 genes differentially expressed in the majority of astrocytomas. Reverse engineering of these 646 genes using Bayesian network analysis produced a gene network for each grade of astrocytoma (Grade I–IV), and 'key genes' within each grade were identified. The genes found to be most influential in the development of the highest grade of astrocytoma, glioblastoma multiforme, were COL4A1, EGFR, BTF3, MPP2, RAB31, CDK4, CD99, ANXA2, TOP2A, and SERBP1. All of these genes were up-regulated except MPP2, which was down-regulated. These 10 genes were able to predict tumor status with 96–100% confidence using logistic regression, cross-validation, and support vector machine analysis. The Markov blanket genes interact with NF-κB, ERK, MAPK, VEGF, growth hormone and collagen to produce a network whose top biological functions are cancer, neurological disease, and cellular movement. Three of the 10 genes, EGFR, COL4A1, and CDK4 in particular, appeared to be potential 'hubs of activity'. Modified expression of these 10 Markov blanket genes increases the lifetime risk of developing glioblastoma compared to the normal population, and the risk estimates increased dramatically with the joint effects of four or more genes: joint interaction effects of 4, 5, 6, 7, 8, 9 or 10 Markov blanket genes produced increases of 9, 13, 20.9, 26.7, 52.8, 53.2, 78.1 and 85.9%, respectively, in the lifetime risk of developing glioblastoma. In summary, it appears that modified expression of several 'key genes' may be required for the development of glioblastoma. Further studies are needed to validate these 'key genes' as useful tools for early detection and novel therapeutic options for these tumors.
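
As an illustration of the classification step the abstract reports (logistic regression with cross-validation over the 10 genes), the sketch below runs the same kind of analysis on a synthetic expression matrix; only the gene names come from the abstract, and the data and accuracy it produces are invented, not the study's results.

```python
# Illustrative only: cross-validated logistic regression predicting tumor
# status from a 10-gene expression matrix. The matrix here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

genes = ["COL4A1", "EGFR", "BTF3", "MPP2", "RAB31",
         "CDK4", "CD99", "ANXA2", "TOP2A", "SERBP1"]
rng = np.random.default_rng(1)
X = rng.normal(size=(100, len(genes)))  # 100 samples x 10 genes (synthetic)
y = (X[:, genes.index("EGFR")] + X[:, genes.index("CDK4")] > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())
```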

Relevance: 30.00%

Abstract:

To promote regional or mutual improvement, numerous interjurisdictional efforts to share tax bases have been attempted. Most of these efforts fail to be consummated. Motivations to share revenues include narrowing fiscal disparities, enhancing regional cooperation and economic development, rationalizing land use, and minimizing revenue losses caused by competition to attract and keep businesses. Various researchers have developed theories to aid understanding of why interjurisdictional cooperation efforts succeed or fail. Walter Rosenbaum and Gladys Kammerer studied two contemporaneous Florida local-government consolidation attempts. Boyd Messinger subsequently tested their Theory of Successful Consolidation on nine consolidation attempts. Paul Peterson's dual theories of modern federalism posit that all levels of government attempt to further economic development and that politicians act in ways that either advance their careers or cement job security; actions related to the latter theory often interfere with the former. Samuel Nunn and Mark Rosentraub sought to learn how interjurisdictional cooperation evolves, and through multiple case studies developed a model framing interjurisdictional cooperation in four dimensions. This dissertation investigates the ability of the above theories to help predict the success or failure of regional tax-base revenue sharing attempts. A research plan was formed that used five sequenced steps to gather data, analyze it, and conclude whether hypotheses concerning the application of these theories were valid. The primary analytical tools were multiple case studies, cross-case analysis, and pattern matching. Data were gathered from historical records, questionnaires, and interviews. The results of this research indicate that the Rosenbaum-Kammerer theory can be a predictor of success or failure in implementing tax-base revenue sharing if it is amended as suggested by Messinger and further modified by a recommendation in this dissertation. Peterson's functional and legislative theories, considered together, were able to predict revenue sharing proposal outcomes. Many of the indicators of interjurisdictional cooperation put forward in the Nunn-Rosentraub model appeared in the cases studied, but the model was not a reliable forecasting instrument.

Relevance: 30.00%

Abstract:

The primary goal of this dissertation is the study of patterns of viral evolution inferred from serially-sampled sequence data, i.e., sequence data obtained from strains isolated at consecutive time points from a single patient or host. RNA viral populations have extremely high genetic variability, largely due to their astronomical population sizes within host systems, high replication rates, and short generation times. It is this aspect of their evolution that demands special attention and a different approach when studying the evolutionary relationships of serially-sampled sequence data. New methods for analyzing serially-sampled data were developed shortly after a groundbreaking HIV-1 study of several patients from whom viruses were isolated at recurring intervals over a period of 10 or more years. These methods assume a tree-like evolutionary model, while many RNA viruses have the capacity to exchange genetic material with one another through a process called recombination. A genealogy involving recombination is best described by a network structure. A more general approach was implemented in a new computational tool, Sliding MinPD, one that is mindful of the sampling times of the input sequences and that reconstructs the viral evolutionary relationships in the form of a network structure with implicit representations of recombination events. The underlying network organization reveals unique patterns of viral evolution and could help explain the emergence of disease-associated mutants and drug-resistant strains, with implications for patient prognosis and treatment strategies. In order to comprehensively test the developed methods and to carry out comparison studies with other methods, synthetic data sets are critical. Therefore, appropriate sequence generators were also developed to simulate the evolution of serially-sampled recombinant viruses, new and more thorough evaluation criteria for recombination detection methods were established, and three major comparison studies were performed. The newly developed tools were also applied to "real" HIV-1 sequence data, and it was shown that the results, represented within an evolutionary network structure, can be interpreted in biologically meaningful ways.
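
The general window-based intuition behind tools like Sliding MinPD can be sketched as follows. This toy code is not the actual tool, and the sequences are invented: the idea it illustrates is that if a query's closest earlier-sampled relative changes from one window to the next, the shift can signal a recombination breakpoint.

```python
# Toy sketch of window-based recombination detection (not Sliding MinPD
# itself): find the closest earlier-sampled reference in each window.

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def closest_per_window(query: str, references: dict, window: int, step: int):
    hits = []
    for start in range(0, len(query) - window + 1, step):
        seg = query[start:start + window]
        best = min(references,
                   key=lambda r: hamming(seg, references[r][start:start + window]))
        hits.append((start, best))
    return hits

refs = {"strainA": "ACGTACGTACGTACGT",   # earlier-sampled sequences (invented)
        "strainB": "TGCATGCATTTTTTTT"}
query = "TGCATGCA" + "ACGTACGT"          # hypothetical mosaic: B then A
print(closest_per_window(query, refs, window=8, step=8))
# -> [(0, 'strainB'), (8, 'strainA')]: the switch suggests a breakpoint
```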

Relevance: 30.00%

Abstract:

Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system. This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce. The new approach introduced in this dissertation is to determine two-bit patterns, the relationships among pairs of bits in the output; for instance we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs. Hence, the maximum leakage can be bounded. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, both in terms of efficiency and accuracy, is shown through a number of case studies found in recent literature.
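
The counting principle at the heart of this approach is easy to illustrate: for a deterministic program with a uniformly distributed secret, the maximum min-entropy leakage is log2 of the number of feasible outputs. The toy program below is invented for illustration; the dissertation's actual technique bounds this count via two-bit patterns rather than by exhaustive enumeration.

```python
# Minimal illustration of the counting principle: for a deterministic
# program, maximum min-entropy leakage = log2(#feasible outputs).
import math

def program(secret: int) -> int:
    return secret & 0b0011  # this toy program leaks only the low two bits

outputs = {program(s) for s in range(256)}  # exhaustive over 8-bit secrets
print(len(outputs), "feasible outputs")
print("max leakage =", math.log2(len(outputs)), "bits")  # -> 2.0 bits
```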

Relevance: 30.00%

Abstract:

Petri nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems, and have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand; HLPNs are therefore more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Net (PrT Net) is used to model a system's behavior, and first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities in this framework. For analysis, this framework combines three complementary techniques: simulation, explicit-state model checking and bounded model checking (BMC). Simulation is a straightforward and speedy method, but covers only some execution paths in an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit-state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities in this framework. The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
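
To make the modeling vocabulary concrete, the sketch below implements the firing semantics of a simple low-level place/transition net; the net itself is invented, and the HLPNs used in the dissertation add typed tokens and transition formulas on top of this basic mechanism.

```python
# Sketch of low-level Petri net firing semantics; places, transitions and
# the marking below are invented for illustration.

marking = {"ready": 1, "running": 0, "done": 0}
transitions = {
    "start":  ({"ready": 1}, {"running": 1}),   # (consumes, produces)
    "finish": ({"running": 1}, {"done": 1}),
}

def enabled(name):
    consumes, _ = transitions[name]
    return all(marking[p] >= k for p, k in consumes.items())

def fire(name):
    consumes, produces = transitions[name]
    assert enabled(name), f"{name} is not enabled"
    for p, k in consumes.items():
        marking[p] -= k
    for p, k in produces.items():
        marking[p] += k

fire("start")
fire("finish")
print(marking)  # {'ready': 0, 'running': 0, 'done': 1}
```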

Relevance: 30.00%

Abstract:

In the last decade, large numbers of social media services have emerged and become widely used in people's daily lives as important information sharing and acquisition tools. With a substantial amount of user-contributed text data on social media, it becomes a necessity to develop methods and tools for analyzing this emerging type of text, in order to better utilize it to deliver meaningful information to users. Previous work on text analytics over the last several decades has mainly focused on traditional types of text such as emails, news and academic literature, and several issues critical to text data on social media have not been well explored: 1) how to detect sentiment in text on social media; 2) how to make use of social media's real-time nature; 3) how to address information overload for flexible information needs. In this dissertation, we focus on these three problems. First, to detect the sentiment of text on social media, we propose a non-negative matrix tri-factorization (tri-NMF) based dual active supervision method to minimize human labeling efforts for this new type of data. Second, to make use of social media's real-time nature, we propose approaches to detect events from text streams on social media. Third, to address information overload for flexible information needs, we propose two summarization frameworks: a dominating-set-based summarization framework and a learning-to-rank-based summarization framework. The dominating-set-based framework can be applied to different types of summarization problems, while the learning-to-rank-based framework utilizes existing training data to guide new summarization tasks. In addition, we integrate these techniques in an application study of event summarization for sports games as an example of how to better utilize social media data.
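
A hedged sketch of the dominating-set idea follows: build a sentence-similarity graph and greedily select sentences until every sentence is adjacent to (covered by) a selected one. The toy graph and the standard greedy heuristic here are illustrative; the dissertation's framework may differ in its similarity measure and selection details.

```python
# Greedy approximation of a minimum dominating set over a sentence graph;
# selected nodes serve as the summary. Graph below is invented.

def greedy_dominating_set(graph):
    """graph: node -> set of adjacent nodes. Returns an approximate
    minimum dominating set via the standard greedy heuristic."""
    uncovered = set(graph)
    summary = []
    while uncovered:
        # pick the node covering the most still-uncovered nodes
        best = max(graph, key=lambda n: len((graph[n] | {n}) & uncovered))
        summary.append(best)
        uncovered -= graph[best] | {best}
    return summary

# Toy sentence graph: an edge means two sentences are similar enough
graph = {0: {1, 2}, 1: {0}, 2: {0, 3}, 3: {2}, 4: set()}
print(greedy_dominating_set(graph))  # -> [0, 2, 4] covers all sentences
```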

Relevance: 30.00%

Abstract:

Greater inclusion of individuals with disabilities in mainstream society is an important goal. One of the best ways to include individuals is to actively promote and encourage their participation in the labor force. Of all disabilities, individuals with spinal cord injuries can plausibly be among the most easily mainstreamed into the labor force; however, less than fifty percent of individuals with spinal cord injuries work. This study focuses on how disability benefit programs, such as Social Security Disability Insurance and Workers' Compensation, the Americans with Disabilities Act, and rehabilitation programs affect employment decisions. The questions were modeled using utility theory with an augmented expenditure function and indifference theory. Probit, Logit, predicted-probability, and linear regression models were used to analyze these questions statistically. Statistical analysis was done on the probability of working, on ever attempting to work after injury, on the number of years after injury at which work was first attempted, and on the number of hours worked per week. The data were drawn from the National Spinal Cord Injury Database and the Spinal Cord Injuries and Labor Database; the latter was created specifically for this study by the author. Receiving disability benefits decreased the probability of working and of ever attempting to work, increased the number of years after injury before the first work attempt was made, and decreased the number of hours worked per week for those individuals working. These results were all statistically significant. The Americans with Disabilities Act decreased the number of years before an individual made a work attempt, a decrease that is statistically significant. The amount of rehabilitation had a significant positive effect for male individuals with low paraplegia, and a significant negative effect for individuals with high tetraplegia; for women, there were significant negative effects for high tetraplegia and high paraplegia. This study finds that the financial disincentives of receiving benefits are the major determinant of whether an individual with a spinal cord injury returns to the labor force. Policies are recommended that would decrease this disincentive.
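
As a purely illustrative sketch of the kind of specification described (a probit of work status on benefit receipt), the code below fits such a model on synthetic data; the variables, effect sizes and results are invented, not the study's.

```python
# Illustrative probit of work status on benefit receipt (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
benefits = rng.integers(0, 2, n)   # 1 = receives disability benefits (invented)
severity = rng.normal(0, 1, n)     # injury-severity proxy (invented)
latent = 0.5 - 1.0 * benefits - 0.3 * severity + rng.normal(0, 1, n)
works = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([benefits, severity]))
probit = sm.Probit(works, X).fit(disp=False)
print(probit.params)                   # negative benefits coefficient = disincentive
print(probit.get_margeff().summary())  # marginal effects on P(work)
```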

Relevance: 30.00%

Abstract:

Finite-Difference Time-Domain (FDTD) algorithms are well-established tools of computational electromagnetics. Because of their practical implementation as computer codes, they are affected by numerical artefacts and noise. In order to obtain better results, we propose using Principal Component Analysis (PCA), based on multivariate statistical techniques. PCA has been successfully used for the analysis of noise and spatio-temporal structure in sequences of images. It allows a straightforward discrimination between the numerical noise and the actual electromagnetic variables, and a quantitative estimation of their respective contributions. Moreover, the FDTD results can be filtered to remove the effect of the noise. In this contribution we show how the method can be applied to several FDTD simulations: the propagation of a pulse in vacuum, and the analysis of two-dimensional photonic crystals. In the latter case, PCA has revealed hidden electromagnetic structures related to actual modes of the photonic crystal.
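
A minimal sketch of the PCA filtering idea is shown below: snapshots of a field component are stacked as rows, PCA is computed via the singular value decomposition, and only the leading components are kept. The data here are synthetic and the paper's actual pipeline may differ.

```python
# PCA filtering of noisy field snapshots via SVD (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 200)
x = np.linspace(0, 1, 64)
signal = np.sin(np.outer(t, 2 * np.pi * x))              # coherent mode
snapshots = signal + 0.1 * rng.normal(size=signal.shape)  # add numerical noise

# PCA via SVD of the mean-centered snapshot matrix (rows = time samples)
mean = snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

k = 2  # keep only the leading components; the remainder is treated as noise
filtered = mean + (U[:, :k] * s[:k]) @ Vt[:k]
print("variance captured:", (s[:k] ** 2).sum() / (s ** 2).sum())
```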

Relevance: 30.00%

Abstract:

Bifidobacteria are claimed to contribute positively to human health through a range of beneficial or probiotic activities, including the amelioration of gastrointestinal and metabolic disorders, and this particular group of gastrointestinal commensals has therefore enjoyed increasing industrial and scientific attention in recent years. However, the molecular mechanisms underlying these probiotic activities are still largely unknown, mainly because molecular tools for bifidobacteria are rather poorly developed, with many strains lacking genetic accessibility. In this work, we describe the generation of transposon insertion mutants in two bifidobacterial strains, B. breve UCC2003 and B. breve NCFB2258. We also report the creation of the first transposon mutant library in a bifidobacterial strain, employing B. breve UCC2003 and a Tn5-based transposome strategy. The library was found to be composed of clones containing single transposon insertions which appear to be randomly distributed along the genome. The usefulness of the library for phenotypic screening was confirmed through the identification and analysis of mutants defective in D-galactose, D-lactose or pullulan utilization.

Relevance: 30.00%

Abstract:

Technology provides a range of tools that facilitate parts of the process of reading, analysis and writing in the humanities, but these tools are limited and poorly integrated. Methods of providing students with the skills to make good use of a range of tools, so as to create an integrated, structured process of writing in the disciplines, are examined, compared and critiqued. Tools for mindmapping and outlining are examined both as reading tools and as tools to structure knowledge and explore ontology creation. Interoperability between these and common word processors is examined in order to explore how students may be taught to develop a structured research and writing process using currently available tools. Requirements for future writing tools are suggested.