898 results for Election Counting and Reporting Software


Relevance:

100.00%

Publisher:

Abstract:

Unemployment is related to economic, political and social aspects. One of the least analysed political aspects is the relationship that, from a partisan or ideological perspective, should exist between election results and the age structure of the electorate, which should be reflected in different electoral costs of unemployment. This chapter updates previous work on the subject, using a spatial econometrics methodology to estimate the relationship between the level of aging and the results obtained in the most recent elections held in Portugal, the October 2015 legislative elections. The results confirm the hypothesis that unemployment carries a higher (resp. lower) electoral cost the younger (resp. older) the electorate is.
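As a purely illustrative sketch of the kind of relationship the chapter estimates (synthetic data and an ordinary-least-squares fit with an unemployment × aging interaction, not the chapter's spatial econometrics specification), the hypothesis can be read off the sign of the interaction coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 308  # roughly the number of Portuguese municipalities (illustrative)

unemployment = rng.uniform(5, 20, n)      # unemployment rate (%)
aging_index = rng.uniform(80, 260, n)     # elderly per 100 young (illustrative)
# synthetic incumbent vote share: the unemployment penalty shrinks as aging grows
vote_share = (45 - 0.8 * unemployment
              + 0.003 * unemployment * aging_index
              + rng.normal(0, 2, n))

# regress vote share on unemployment, aging, and their interaction;
# beta[3] > 0 means the electoral cost of unemployment falls as aging rises
X = np.column_stack([np.ones(n), unemployment, aging_index,
                     unemployment * aging_index])
beta, *_ = np.linalg.lstsq(X, vote_share, rcond=None)

def unemployment_effect(aging):
    """Marginal effect of unemployment on vote share at a given aging level."""
    return beta[1] + beta[3] * aging

print("effect with a young electorate:", unemployment_effect(100))
print("effect with an old electorate :", unemployment_effect(220))
```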

Relevance:

100.00%

Publisher:

Abstract:

Introduction and Objectives: Organizations need data to make decisions; they need elements that serve as a basis for solving problems or forming a judgment. They need an index, an objective piece of information, available for any individual to manipulate according to the type of analysis they intend to carry out. We must increasingly be concerned with being properly informed and with establishing the most advantageous relationships for our organization and for the market in which we operate. This will involve the use of different analysis models and data collection instruments, in order to create a base of information useful to the various bodies (municipal and national) that design the sports policies to be developed at local and national level. With the present investigation, we intend to propose a Model of an Observatory of Sport and Complementary Activities associated with a Sports Information System, which allows local government (municipalities) to survey, whenever necessary, the evolution of the municipal sports market, so that, in a sustained way, they can create and develop local sports policies that best fit their reality and that serve as a basis for the creation of the Sport Satellite Account. Material and Methods: The model will be tested in the municipality of Odivelas, within the structure of CM Odivelas. It will be necessary to survey the clubs of the municipality (only clubs with active sporting activity) in a first phase and, in a second phase, the companies that support them. The presidents and directors of both types of organization will be the targets of a computer-based and/or paper-based questionnaire, according to the realities and characteristics of each club and company. The collected data will be processed through the software application created specifically for this purpose, which will allow the export of data to other computer media. The application itself will allow the construction and presentation of reports according to the requested requirements. Results: The general objective of the study was achieved with the construction of a Model of a Municipal Sports Observatory, based on a proposal for a Sports Sector and a Sport Satellite Account, drawing on the case study at the Odivelas Municipal Council. The specific objectives concerned the construction of data collection instruments to: identify the sports services and complementary sports services provided in the municipality of Odivelas; and identify the organizations that provide sports services and services complementary to sport. The collected data will have to be cross-referenced with the information that already exists but is dispersed across the various structures of the CMO, in order to make it possible to survey the economic values generated by the sporting activities of clubs, associations and service-providing companies, and to identify the gaps in services needed by clubs and associations that do not exist in the municipality.

Relevance:

100.00%

Publisher:

Abstract:

Libraries since their inception 4000 years ago have been in a process of constant change. Although change moved slowly for centuries, in recent decades academic libraries have been continuously striving to adapt their services to the ever-changing needs of students and academic staff. In addition, the e-content revolution, technological advances, and ever-shrinking budgets have obliged libraries to allocate their limited resources efficiently between collection and services. Unfortunately, this resource allocation is a complex process due to the diversity of data sources and formats that must be analyzed prior to decision-making, as well as the lack of efficient integration methods. The main purpose of this study is to develop an integrated model that supports libraries in making optimal budgeting and resource allocation decisions among their services and collection by means of a holistic analysis. To this end, a combination of several methodologies and structured approaches is conducted. Firstly, a holistic structure and the required toolset to assess academic libraries holistically are proposed, to collect and organize the data from an economic point of view. A four-pronged theoretical framework is used in which the library system and collection are analyzed from the perspective of users and internal stakeholders. The first quadrant corresponds to the internal perspective of the library system, that is, analyzing library performance and the costs incurred and resources consumed by library services. The second quadrant evaluates the external perspective of the library system; users' perception of service quality is judged in this quadrant. The third quadrant analyses the external perspective of the library collection, that is, evaluating the impact of the current library collection on its users. Finally, the fourth quadrant evaluates the internal perspective of the library collection; the usage patterns followed to manipulate the library collection are analyzed. With a complete framework for data collection in place, these data, coming from multiple sources and therefore in different formats, need to be integrated and stored in an adequate scheme for decision support. Secondly, a data warehousing approach is designed and implemented to integrate, process, and store the holistically collected data. Ultimately, the strategic data stored in the data warehouse are analyzed and applied for different purposes, including the following: 1) Data visualization and reporting is proposed to allow library managers to publish library indicators in a simple and quick manner by using online reporting tools. 2) Sophisticated data analysis is recommended through the use of data mining tools; three data mining techniques are examined in this research study: regression, clustering and classification. These data mining techniques have been applied to the case study in the following manner: predicting the future investment in library development; finding clusters of users that share common interests and similar profiles but belong to different faculties; and predicting library factors that affect student academic performance by analyzing possible correlations between library usage and academic performance. 3) As input for optimization models, early experiences of developing an optimal resource allocation model to distribute resources among the different processes of a library system are documented in this study. Specifically, the problem of allocating funds for the digital collection among the divisions of an academic library is addressed. An optimization model for the problem is defined with the objective of maximizing the usage of the digital collection across all library divisions subject to a single collection budget. By proposing this holistic approach, the research study contributes to knowledge by providing an integrated solution to assist library managers in making economic decisions based on an "as realistic as possible" perspective of the library situation.
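As an illustration of the single-budget allocation problem sketched in the last paragraph (hypothetical divisions, usage rates, and bounds; not the study's actual model or data), a linear program can split one digital-collection budget across divisions so that expected usage is maximized:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: expected uses (e.g., downloads) per monetary unit spent
# in each division, with minimum and maximum spending per division.
usage_per_unit = np.array([4.2, 3.1, 5.0, 2.4])        # divisions A..D
lower = np.array([10_000, 8_000, 12_000, 5_000])
upper = np.array([60_000, 40_000, 80_000, 30_000])
budget = 120_000

# linprog minimizes, so negate the objective to maximize total expected usage.
res = linprog(
    c=-usage_per_unit,
    A_ub=np.ones((1, len(usage_per_unit))),   # sum of allocations <= budget
    b_ub=[budget],
    bounds=list(zip(lower, upper)),
    method="highs",
)

print("allocation per division:", res.x)
print("expected total usage:", -res.fun)
```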

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation, Universidade de Brasília, Faculdade de Economia, Administração e Contabilidade, Programa de Pós-Graduação em Administração, 2016.

Relevance:

100.00%

Publisher:

Abstract:

By employing interpretive policy analysis, this thesis aims to assess, measure, and explain policy capacity for government and non-government organizations involved in reclaiming Alberta's oil sands. Using this type of analysis to assess policy capacity is a novel approach for understanding reclamation policy, and this research therefore provides a unique contribution to the literature surrounding reclamation policy. The oil sands region in northeast Alberta, Canada is an area of interest for a few reasons, primarily the vast reserves of bitumen and the environmental cost associated with developing this resource. An increase in global oil demand has created an incentive for industry to seek out and develop new reserves. Alberta's oil sands are one of the largest remaining reserves in the world, and there is significant interest in increasing production in this region. Furthermore, tensions in several oil-exporting nations in the Middle East remain unresolved, and this has garnered additional support for a supply-side solution to North American oil demands. This solution relies upon the development of reserves in both the United States and Canada. These compounding factors have contributed to increased development in the oil sands of northeastern Alberta. Essentially, a rapid expansion of oil sands operations is ongoing and is the source of significant disturbance across the region. This disturbance, and the promises of reclamation, is a source of contentious debate amongst stakeholders and continues to be highly visible in the media. If oil sands operations are to retain their social license to operate, it is critical that reclamation efforts be effective. One concern expressed by non-governmental organizations (NGOs) is criticism of the current monitoring and enforcement of regulatory programs in the oil sands. Alberta's NGOs have suggested that the data made available to them originate from industrial sources and are generally unchecked by government. In an effort to discern the overall status of reclamation in the oil sands, this study explores several factors essential to policy capacity: work environment, training, employee attitudes, perceived capacity, policy tools, evidence-based work, and networking. Data were collected through key informant interviews with senior policy professionals in government and non-government agencies in Alberta. The following are agencies of interest in this research: Canadian Association of Petroleum Producers (CAPP); Alberta Environment and Sustainable Resource Development (AESRD); Alberta Energy Regulator (AER); Cumulative Environmental Management Association (CEMA); Alberta Environment Monitoring, Evaluation, and Reporting Agency (AEMERA); Wood Buffalo Environmental Association (WBEA). The aim of this research is to explain how and why reclamation policy is conducted in Alberta's oil sands. This will illuminate government capacity, NGO capacity, and the interaction of these two agency typologies. In addition to answering the research questions, another goal of this project is to show that interpretive analysis of policy capacity can be used to measure and predict policy effectiveness. The oil sands of Alberta are the focus of this project; however, future projects could focus on any government policy scenario utilizing evidence-based approaches.

Relevance:

100.00%

Publisher:

Abstract:

As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance costs. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. As a consequence, research effort to predict which software modules are likely to contain faults has been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates when predicting faulty modules from software metrics. However, as context-specific metrics differ from project to project, prediction across projects is difficult to achieve. Prediction models built from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, the ability to take full advantage of existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
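A minimal sketch of the baseline setting described above, using scikit-learn and invented column names (this is not the dissertation's adaptation approach): a classifier trained on one project's module metrics and fault labels is applied directly to another project, which is exactly the setting where performance typically degrades.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical CSVs: one row per module, static-code metrics plus a 'faulty' label.
metrics = ["loc", "cyclomatic_complexity", "fan_in", "fan_out", "churn"]
source = pd.read_csv("project_a_modules.csv")   # project with a known fault history
target = pd.read_csv("project_b_modules.csv")   # project we want predictions for

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(source[metrics], source["faulty"])

# Cross-project prediction: without adapting the model, metric distributions
# differ between projects and accuracy usually drops.
probs = model.predict_proba(target[metrics])[:, 1]
print("cross-project AUC:", roc_auc_score(target["faulty"], probs))
```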

Relevance:

100.00%

Publisher:

Abstract:

The ability to use Software Defined Radio (SDR) in civilian mobile applications will make it possible for the next generation of mobile devices to handle multi-standard personal wireless devices and ubiquitous wireless devices. The original military standard created many beneficial characteristics for SDR, but resulted in a number of disadvantages as well. Many challenges in commercializing SDR are still the subject of interest in the software radio research community. Four main issues that have already been addressed are performance, size, weight, and power. This investigation presents an in-depth study of SDR inter-component communication in terms of total link delay related to the number of components and packet sizes in systems based on the Software Communications Architecture (SCA). The study is based on the investigation of a controlled-environment platform. Results suggest that the total link delay does not increase linearly with the number of components or the packet size. A closed-form expression for the delay was modeled using a logistic function in terms of the number of components and the packet size. The model performed well when the number of components was large. For mobile applications, energy consumption has become one of the most critical limitations. SDR will not only provide the flexibility of multi-protocol support; this desirable feature also brings a choice of mobile protocols. Having such a variety of choices available creates a problem in selecting the most appropriate protocol for transmission. An investigation of a real-time algorithm to optimize energy efficiency was also performed. Communication energy models, including switching-cost estimation, were used to develop a waveform selection algorithm. Simulations were performed to validate the concept.
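As a hedged sketch of the kind of model described above (an assumed functional form fitted to synthetic measurements, not the study's data or fitted coefficients), a logistic function of a combined load term can be fitted to total link delays with non-linear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_delay(X, L, k, x0):
    """Logistic model of total link delay versus a combined load term
    (number of SCA components multiplied by packet size)."""
    n_components, packet_size = X
    load = n_components * packet_size
    return L / (1.0 + np.exp(-k * (load - x0)))

# Synthetic measurements standing in for the controlled-environment platform data.
rng = np.random.default_rng(1)
n_components = rng.integers(2, 12, 200)
packet_size = rng.choice([256, 512, 1024, 2048], 200)          # bytes
delay_ms = (logistic_delay((n_components, packet_size), 8.0, 4e-4, 9_000)
            + rng.normal(0, 0.2, 200))

params, _ = curve_fit(logistic_delay, (n_components, packet_size), delay_ms,
                      p0=[10.0, 1e-4, 8_000])
print("fitted L, k, x0:", params)
```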

Relevance:

100.00%

Publisher:

Abstract:

Information and communication technologies play an increasingly important role in society, in the sense that all areas and professions make use of digital resources. Schools cannot ignore this reality, since they aim to form well-rounded individuals who are integrated into today's society. Educational software can be used very early in children's education, but it must be chosen carefully and its use monitored. This article presents the results of the use of educational software in English for early language awareness in context with pre-school children, in the kindergarten of the Redemptorist Fathers nursery centre "The Smallest Fox" in Castelo Branco, with a group of 21 children under five years of age. Early awareness of a foreign language such as English can be fostered with digital multimedia resources and the various software packages available on the market. However, the small case study described here reveals some resistance from parents and educators, and limited preparation on their part to choose and monitor children's use of ICT; it also highlights the interest shown by the children involved and their learning of a few English words in the different everyday contexts worked on. The study opens perspectives on the need for close monitoring of such uses and for training educators in the use of multilingual-awareness resources in pre-school education.

Relevance:

100.00%

Publisher:

Abstract:

Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and patients' experience at reduced costs. The increasing digitalization of people's health information, however, raises new threats regarding information security and privacy. Accidental or deliberate breaches of health data may lead to societal pressure, embarrassment and discrimination. Information security and privacy are paramount to achieving high-quality healthcare services and, further, to not harming individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems, that is, the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health practice. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific legislation for privacy and data protection in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches for health surveys and surveillance. This thesis consists of publications contributing to mHealth security and privacy in various ways: with a comprehensive literature review about mHealth in Brazil; with the design of a security framework for MDCSs (SecourHealth); with the design of an MDCS (GeoHealth); with the design of a Privacy Impact Assessment template for MDCSs; and with the study of ontology-based obfuscation and anonymisation functions for health data.
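To make the ontology-based obfuscation idea concrete, here is a toy sketch with an invented is-a fragment (not the thesis's actual functions or any real medical ontology): a sensitive diagnosis is generalized to a broader ancestor term before a record is reported, trading precision for lower re-identification risk.

```python
# Toy is-a hierarchy (child -> parent); a real system would use a medical
# ontology such as ICD or SNOMED CT rather than this invented fragment.
PARENT = {
    "tuberculosis": "respiratory infection",
    "influenza": "respiratory infection",
    "respiratory infection": "infectious disease",
    "infectious disease": "disease",
}

def generalize(term: str, levels: int = 1) -> str:
    """Replace a term with an ancestor `levels` steps up the hierarchy,
    stopping at the root if it is reached first."""
    for _ in range(levels):
        term = PARENT.get(term, term)
    return term

record = {"age": 34, "diagnosis": "tuberculosis"}
record["diagnosis"] = generalize(record["diagnosis"], levels=2)
print(record)   # {'age': 34, 'diagnosis': 'infectious disease'}
```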

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND: Many publications report the prevalence of chronic kidney disease (CKD) in the general population. Comparisons across studies are hampered as CKD prevalence estimations are influenced by study population characteristics and laboratory methods. METHODS: For this systematic review, two researchers independently searched PubMed, MEDLINE and EMBASE to identify all original research articles that were published between 1 January 2003 and 1 November 2014 reporting the prevalence of CKD in the European adult general population. Data on study methodology and reporting of CKD prevalence results were independently extracted by two researchers. RESULTS: We identified 82 eligible publications and included 48 publications of individual studies for the data extraction. There was considerable variation in population sample selection. The majority of studies did not report the sampling frame used, and the response ranged from 10 to 87%. With regard to the assessment of kidney function, 67% used a Jaffe assay, whereas 13% used the enzymatic assay for creatinine determination. Isotope dilution mass spectrometry calibration was used in 29%. The CKD-EPI (52%) and MDRD (75%) equations were most often used to estimate glomerular filtration rate (GFR). CKD was defined as estimated GFR (eGFR) <60 mL/min/1.73 m² in 92% of studies. Urinary markers of CKD were assessed in 60% of the studies. CKD prevalence was reported by sex and age strata in 54 and 50% of the studies, respectively. In publications with a primary objective of reporting CKD prevalence, 39% reported a 95% confidence interval. CONCLUSIONS: The findings from this systematic review showed considerable variation in methods for sampling the general population and assessment of kidney function across studies reporting CKD prevalence. These results are utilized to provide recommendations to help optimize both the design and the reporting of future CKD prevalence studies, which will enhance comparability of study results.
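For context on the estimating equations mentioned above, here is a minimal sketch of the CKD-EPI 2009 creatinine equation as it is commonly published, together with the eGFR < 60 mL/min/1.73 m² criterion used in most of the reviewed studies; the coefficients are reproduced from the commonly cited form of the equation and should be verified against the original reference before any use.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: float, female: bool,
                      black: bool = False) -> float:
    """CKD-EPI 2009 creatinine equation (mL/min/1.73 m^2), as commonly published;
    check the coefficients against the original publication before relying on them."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

egfr = egfr_ckd_epi_2009(scr_mg_dl=1.4, age=67, female=True)
print(f"eGFR = {egfr:.1f} mL/min/1.73 m^2; below 60: {egfr < 60}")
```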

Relevance:

100.00%

Publisher:

Abstract:

Background: There is increasing awareness that regardless of the proven value of clinical interventions, the use of effective strategies to implement such interventions into clinical practice is necessary to ensure that patients receive the benefits. However, there is often confusion between what is the clinical intervention and what is the implementation intervention. This may be caused by a lack of conceptual clarity between 'intervention' and 'implementation', yet at other times by ambiguity in application. We suggest that both the scientific and the clinical communities would benefit from greater clarity; therefore, in this paper, we address the concepts of intervention and implementation, primarily as in clinical interventions and implementation interventions, and explore the grey area in between. Discussion: To begin, we consider the similarities, differences and potential greyness between clinical interventions and implementation interventions through an overview of concepts. This is illustrated with reference to two examples of clinical interventions and implementation intervention studies, including the potential ambiguity in between. We then discuss strategies to explore the hybridity of clinical-implementation intervention studies, including the role of theories, frameworks, models, and reporting guidelines that can be applied to help clarify the clinical and implementation intervention, respectively. Conclusion: Semantics provide opportunities for improved precision in depicting what is 'intervention' and what is 'implementation' in health care research. Further, attention to study design, the use of theory, and adoption of reporting guidelines can assist in distinguishing between the clinical intervention and the implementation intervention. However, certain aspects may remain unclear in analyses of hybrid studies of clinical and implementation interventions. Recognizing this potential greyness can inform further discourse.

Relevance:

100.00%

Publisher:

Abstract:

Software is an important infrastructural component of scientific research practice. The work of research often requires scientists to develop, use, and share software in order to address their research questions. This report presents findings from a survey of researchers at the University of Washington in three broad areas: Oceanography, Biology, and Physics. This survey is part of the National Science Foundation-funded study Scientists and their Software: A Sociotechnical Investigation of Scientific Software Development and Sharing (ACI-1302272). We inquired about each respondent's research area and data use along with their use, development, and sharing of software. Finally, we asked about the challenges researchers face with software and about their concerns regarding software's effect on study replicability. These findings are part of ongoing efforts to develop deeper characterizations of the role of software in twenty-first century scientific research.

Relevance:

100.00%

Publisher:

Abstract:

The notion of the healthy organization is gaining increasing relevance in the academic and business worlds, as it has been shown that organizations that create healthy environments and adopt healthy practices have a positive effect on their different stakeholder groups, including employees, partners, suppliers, customers and society (Grueso y Rey, 2013). In order to contribute to the above, this paper presents the progress of a documentary and applied research project that seeks to understand how the adoption of social responsibility practices contributes to employee well-being in two companies in the retail sector in Colombia. To this end, the theoretical foundations and the background of studies and research on Corporate Social Responsibility and employee well-being are presented first. The methodology used to collect the information and the process by which it was analysed are then described. The following section describes the findings and, finally, the conclusions of the study are presented.

Relevance:

100.00%

Publisher:

Abstract:

Pesticide application has been described by many researchers as a very inefficient process. In some cases, there are reports that only 0.02% of the applied product is used for effective control of the problem. The main factor that influences pesticide application is the droplet size formed at the spray nozzles. Many parameters affect the dynamics of the droplets, such as wind, temperature, relative humidity, and others. Small droplets are biologically more active, but they are affected by evaporation and drift. On the other hand, large droplets do not promote a good distribution of the product on the target. Given the risk of contaminating non-target areas and the high costs involved in applications, knowledge of the droplet size is therefore of fundamental importance in application technology. When sophisticated technology for droplet analysis is unavailable, it is common to use artificial targets such as water-sensitive paper to sample droplets. For field sampling, water-sensitive papers are placed in the plots where the product will be applied. When droplets impinge on the paper, its yellow surface is stained dark blue, making their recognition easy. The droplets collected on these papers have a range of sizes, so the determination of the droplet size distribution gives the mass distribution of the material and hence the efficiency of the product application. The stains produced by the droplets show a spread factor proportional to their respective initial sizes. One methodology for analysing the droplets is to count and measure them under a microscope. The Porton N-G12 graticule, which has equally spaced class intervals in a geometric progression with ratio √2, is coupled to the lens of the microscope. The droplet size parameters most frequently used are the Volume Median Diameter (VMD) and the Number Median Diameter (NMD). At the VMD value, a representative droplet sample is divided into two equal parts by volume, such that one part contains droplets smaller than the VMD and the other part contains droplets larger than the VMD. The same process is followed to obtain the NMD, which divides the sample into two equal parts with respect to the number of droplets. The ratio between VMD and NMD allows the uniformity of the droplets to be evaluated. After that, graphs of the accumulated probability of droplet volume and number are plotted on log-scale paper (accumulated probability versus the median diameter of each size class). The graph provides the median diameter at the point on the x-axis corresponding to the value of 50% on the y-axis. This whole process is very slow and subject to operator error. Therefore, in order to reduce the difficulty involved in measuring droplets, a numerical model was developed, implemented in an easy and accessible computational language, which provides approximate VMD and NMD values with good precision. The inputs to this model are the frequencies of the droplet sizes collected on the water-sensitive paper, observed with the Porton N-G12 graticule fitted to the microscope. With these data, the accumulated distributions of droplet volume and size are evaluated. The graphs obtained by plotting these distributions allow the VMD and NMD to be obtained by linear interpolation, since in the middle of the distributions the shape of the curves is linear. These values are essential to evaluate the uniformity of the droplets and to estimate the volume deposited on the observed paper from the density (droplets/cm²).
This methodology for estimating droplet volume was developed within Project 11.0.94.224 of CNPMA/EMBRAPA. Observed data from aerial herbicide spraying samples, carried out by the Project in the municipality of Pelotas/RS, were used to compare the values obtained with the manual graphical method against those obtained with the model. The model reproduced, with good precision, the VMD and NMD values for each sampled collector, allowing the quantity of deposited product, and consequently the quantity lost to drift, to be estimated. The graphs of the variability of VMD and NMD showed that the number of droplets reaching the collectors had a small dispersion, while the deposited volume showed a wide interval of variation, probably because of the strong action of air turbulence on the droplet distribution, emphasizing the need for a deeper study to verify this influence on drift.
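As a hedged sketch of the calculation the numerical model automates (illustrative size classes and counts, with plain linear interpolation over the cumulative fractions in place of the log-scale graphical procedure described above), VMD and NMD can be obtained from the class frequencies as follows:

```python
import numpy as np

# Illustrative droplet-size classes (midpoint diameters in micrometres) and the
# number of stains counted in each class on a water-sensitive paper.
diameter_um = np.array([50, 71, 100, 141, 200, 283, 400], dtype=float)
counts = np.array([12, 30, 55, 48, 25, 9, 2], dtype=float)

# The volume in each class is proportional to count * d^3.
volume = counts * diameter_um ** 3

cum_count = np.cumsum(counts) / counts.sum()     # cumulative number fraction
cum_volume = np.cumsum(volume) / volume.sum()    # cumulative volume fraction

# Median diameters: interpolate the diameter at which each cumulative curve hits 50%.
nmd = np.interp(0.5, cum_count, diameter_um)     # Number Median Diameter
vmd = np.interp(0.5, cum_volume, diameter_um)    # Volume Median Diameter

print(f"NMD = {nmd:.1f} um, VMD = {vmd:.1f} um, VMD/NMD = {vmd / nmd:.2f}")
```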

Relevance:

100.00%

Publisher:

Abstract:

The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated much as natural languages are, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis mainly focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scale, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were sought by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the commonest bugs and errors at different code granularity levels (file and method levels). The data used and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to other related works are discussed.
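As a minimal sketch of the text-based PLI point of view (an illustrative pipeline on a toy training set, not the thesis's models or data), character n-gram TF-IDF features with a linear classifier can identify the language of a source snippet:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real system would be trained on labelled
# files mined from large archives such as GitHub or Software Heritage.
snippets = [
    "def add(a, b):\n    return a + b",
    "print('hello')",
    'public static void main(String[] args) { System.out.println("hi"); }',
    "int add(int a, int b) { return a + b; }",
    '#include <stdio.h>\nint main(void) { printf("hi\\n"); return 0; }',
    "for i in range(10):\n    print(i)",
]
labels = ["Python", "Python", "Java", "C", "C", "Python"]

pli = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
pli.fit(snippets, labels)

# Likely ['Java'] given the character n-gram overlap with the Java example.
print(pli.predict(['System.out.println("sum=" + (2 + 3));']))
```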