883 results for 10 Note Management Program


Relevance:

40.00%

Publisher:

Abstract:

General note: Title and date provided by Bettye Lane.

Relevance:

40.00%

Publisher:

Abstract:

Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches that developers use nowadays. Using ORM, objects in Object-Oriented languages are mapped to records in the database, and object manipulations are automatically translated to SQL queries. As a result of such conceptual abstraction, developers do not need deep knowledge of databases; however, all too often this abstraction leads to inefficient and incorrect database access code. Thus, this thesis proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems based on their severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and there is a need for tools that can help improve and tune the performance of ORM-based applications. Thus, we propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers tune performance-related ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code. We automatically rank the detected anti-pattern instances according to their performance impacts. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average. We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27--138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
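The abstract names anti-patterns only abstractly; a minimal sketch of the classic "N+1 select" anti-pattern, one of the best-known ORM performance problems, and its fix is given below. Python and SQLAlchemy are used purely as an illustrative ORM (the thesis is not tied to this framework), and the Author/Book schema is invented for the example:

    from sqlalchemy import ForeignKey, create_engine, select
    from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                mapped_column, relationship, selectinload)

    class Base(DeclarativeBase):
        pass

    class Author(Base):
        __tablename__ = "authors"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]
        books: Mapped[list["Book"]] = relationship(back_populates="author")

    class Book(Base):
        __tablename__ = "books"
        id: Mapped[int] = mapped_column(primary_key=True)
        title: Mapped[str]
        author_id: Mapped[int] = mapped_column(ForeignKey("authors.id"))
        author: Mapped["Author"] = relationship(back_populates="books")

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(Author(name="Ada", books=[Book(title="Notes")]))
        session.commit()

        # Anti-pattern: each access to author.books lazily fires one
        # extra SELECT, so listing N authors costs 1 + N queries.
        for author in session.scalars(select(Author)):
            print(author.name, len(author.books))

        # Fix: selectinload batches the association fetch, so the same
        # loop costs 2 queries no matter how many authors there are.
        for author in session.scalars(
                select(Author).options(selectinload(Author.books))):
            print(author.name, len(author.books))

Static detection of this pattern amounts to spotting loops whose bodies traverse lazily loaded ORM associations, which is the style of analysis the thesis describes.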

Relevance:

40.00%

Publisher:

Abstract:

BACKGROUND: Although most gastrointestinal stromal tumours (GIST) carry oncogenic mutations in KIT exons 9, 11, 13 and 17, or in platelet-derived growth factor receptor alpha (PDGFRA) exons 12, 14 and 18, around 10% of GIST are free of these mutations. Genotyping and accurate detection of KIT/PDGFRA mutations in GIST are becoming increasingly useful for clinicians in the management of the disease. METHOD: To evaluate and improve laboratory practice in GIST mutation detection, we developed a mutational screening quality control program. Eleven laboratories were enrolled in this program and 50 DNA samples were analysed, each of them by four different laboratories, giving 200 mutational reports. RESULTS: In total, eight mutations were not detected by at least one laboratory, and one false positive result was reported in one sample. Thus, the mean global rate of errors with clinical implications, based on 200 reports, was 4.5%. Concerning the detection of specific polymorphisms, the detection rate varied from 0 to 100%, depending on the laboratory. The way mutations were reported was very heterogeneous, and some errors were detected. CONCLUSION: This study demonstrated that such a program is necessary for laboratories to improve the quality of the analysis, because an error rate of 4.5% may have clinical consequences for the patient.
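The reported rate follows from simple arithmetic, on the assumption (implied but not stated outright above) that the eight missed mutations and the single false positive are the nine clinically relevant errors:

    (8 + 1) / 200 = 9 / 200 = 4.5%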

Relevance:

40.00%

Publisher:

Abstract:

The last ten years have seen a steady increase in requests for building maintenance services from the large-scale organized retail sector (Grande Distribuzione Organizzata, GDO); the demand is for services that fall under Facility Management, i.e. contracts based on quality standards predefined at the contractual stage and on a guarantee of 24/7 intervention. The first part of the thesis frames the discipline of FM, its rationale, its tools and the actors involved. After a review of Italian maintenance legislation, a classification of the types of maintenance intervention and an assessment of the weight of maintenance in the Life Cycle Cost, the thesis analyses the operating procedures of FM applied to building maintenance in the GDO case. The thesis was carried out within a company internship, which allowed the author to work on the case study of a Global Service contract with a major retail chain and to use a management software package (PlaNet) that tracks, for each store, the maintenance interventions and their location within the building. This provides a complete picture of the interventions, with implementation procedures already known, and ensures more effective handling of service calls, which are followed through an integrated Call Center module. The thesis critically examines the main maintenance-related project documents, the Maintenance Plan (Piano di Manutenzione) and the building file (Fascicolo dell'Opera), highlighting their limits due to the incompleteness of the information they provide. The final objective of the thesis is to propose a document that integrates the Maintenance Plan and the Fascicolo, in order to streamline the information flow and create a complete and exhaustive reference document covering both the technical aspects of maintenance procedures and safety requirements.

Relevance:

40.00%

Publisher:

Abstract:

The distance-learning program "School Management" supports decision makers at school and ministry level in Eritrea in shaping formal and informal learning processes at the various levels of schools and curricula. This paper examines how the distance-learning program is interconnected with educational system development. (DIPF/Orig.)

Relevance:

40.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying, which provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud, using unique progress semantics that allow efficient and deterministic query processing over samples while providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
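As a rough illustration of the neighborhood-centric model that NSCALE advocates (a sketch only, not NSCALE's actual API, which the abstract does not show), the user writes a function over a k-hop subgraph and the system applies it to every vertex's neighborhood. The sketch below uses Python with the networkx library; the function name neighborhood_centric and the choice of analysis are invented for the example:

    import networkx as nx

    def neighborhood_centric(graph, radius, analyze):
        """Apply `analyze` to the k-hop ego network of every vertex."""
        results = {}
        for v in graph.nodes:
            # Extract the subgraph of all nodes within `radius` hops of v
            # (NSCALE performs this extraction in a distributed fashion).
            subgraph = nx.ego_graph(graph, v, radius=radius)
            results[v] = analyze(subgraph)
        return results

    # Example analysis: edge density of each vertex's 2-hop neighborhood.
    g = nx.karate_club_graph()
    density_by_vertex = neighborhood_centric(g, radius=2, analyze=nx.density)
    print(max(density_by_vertex.items(), key=lambda kv: kv[1]))

In a distributed setting the interesting part, which this sketch elides, is extracting and packing only the needed subgraphs into memory; that is precisely the support NSCALE adds over vertex-centric frameworks.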

Relevance:

40.00%

Publisher:

Abstract:

The exponential increase in clinical research has profoundly changed medical sciences. Evidence accumulated from clinical trials over the past three decades has led to the proposal that clinical care should not be based solely on clinical expertise and patient values, but should also integrate robust data from systematic research. As a consequence, clinical research has become more complex, methods have become more rigorous, and evidence is usually not easily translated into clinical practice. Therefore, the instruction of clinical research methods for scientists and clinicians must adapt to this new reality. To address this challenge, a global distance-learning clinical research training program was developed, based on collaborative learning, whose pedagogical goal was to develop critical thinking skills in clinical research. We describe and analyze the challenges of this course, and possible solutions, after 5 years of experience (2008-2012) with the program. Through evaluation by students and faculty, we identified and reviewed the following challenges: 1) student engagement and motivation, 2) the impact of a heterogeneous audience on learning, 3) learning in large groups, 4) enhancing group learning, 5) enhancing social presence, 6) dropouts, 7) quality control, and 8) course management. We discuss these issues and potential alternatives with regard to our research and background.

Relevance:

40.00%

Publisher:

Abstract:

Information Technology (IT) can be an important component of innovation: by enabling e-learning, it can provide conditions in which the organization can work with new business and improved processes. In this regard, Learning Management Systems (LMS) allow communication and interaction between teachers and students in virtual spaces. However, the literature indicates that there are gaps in the research, especially concerning the use of IT for the management of e-learning. The purpose of this paper is to analyze the available literature on the application of LMS to e-learning management, seeking to present possibilities for research in the field. An integrative literature review was performed considering the Web of Science, Scopus, Ebsco and Scielo databases, where 78 references were found, of which 25 were full papers. The analysis derives interesting characteristics from the scientific studies, highlighting gaps and guidelines for future research.

Relevance:

40.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques; using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:

• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).

To evaluate ZSIM, two types of test circuits were used:

1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which open source files could be obtained.

The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost, and the experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.

To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
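As a rough sketch of the SIMD idea described above (the abstract does not give ZSIM's actual data structures, so the layout below is an assumption for illustration), gate values can be held in flat arrays so that one topological level of the netlist is evaluated with vector gathers and bitwise operations instead of per-gate branching code. The example uses Python/NumPy, with NumPy fancy indexing standing in for hardware gather instructions; all net and gate indices are invented:

    import numpy as np

    # Net values for 8 nets; nets 0-3 are primary inputs.
    values = np.array([0, 1, 1, 0, 0, 0, 0, 0], dtype=np.uint8)

    # One topological level of four 2-input NAND gates, stored
    # structure-of-arrays style so inputs can be fetched with gathers.
    in_a = np.array([0, 1, 2, 3])   # first-input net index per gate
    in_b = np.array([1, 2, 3, 0])   # second-input net index per gate
    out  = np.array([4, 5, 6, 7])   # output net index per gate

    # Gather both input vectors, then evaluate every gate in the level
    # with one vectorized NAND; no per-gate loop or lock is needed.
    a = values[in_a]                # vector gather
    b = values[in_b]
    values[out] = (a & b) ^ 1       # NAND across the whole level
    print(values)                   # -> [0 1 1 0 1 0 1 1]

Levelizing the netlist guarantees that all gates in a level are independent, which is what makes the lock-free, data-parallel update safe.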

Relevance:

40.00%

Publisher:

Abstract:

Organic agriculture represents one of the fastest growing segments of U.S. agriculture (USDA-RMA). With this in mind, the USDA’s Risk Management Agency (RMA) continues to expand crop insurance options for organic growers. In 2016 and 2017, organic producers in Maryland will see additional crops with organic crop insurance options. Increasing crop insurance options will allow this segment of producers new opportunities to manage their risks.

Relevance:

40.00%

Publisher:

Abstract:

Part 19: Knowledge Management in Networks

Relevance:

40.00%

Publisher:

Abstract:

Monitoring genetic diversity is fundamental in a restocking program. The genetic diversity of pacu, Piaractus mesopotamicus (Holmberg, 1887), was evaluated at two fish farms in Andirá, Paraná, Brazil, used in the restocking program of the Paranapanema River. Six microsatellite loci were amplified to evaluate 60 fin samples. Broodstock B showed a higher number of alleles and higher heterozygosity (alleles: 22; H_O: 0.628) than broodstock A (alleles: 21; H_O: 0.600). Low-frequency alleles were observed in both stocks. Positive inbreeding coefficients at loci Pme2 (stock A: F_IS = 0.30; stock B: F_IS = 0.20), Pme5 (stock B: F_IS = 0.15), Pme14 (stock A: F_IS = 0.07) and Pme28 (stock A: F_IS = 0.24; stock B: F_IS = 0.20) indicated a deficiency of heterozygotes. A null allele was detected at locus Pme2. Negative estimates at loci Pme4 (stock A: F_IS = -0.43; stock B: F_IS = -0.37), Pme5 (stock A: F_IS = -0.11), Pme14 (stock B: F_IS = -0.15) and Pme32 (stock A: F_IS = -0.93; stock B: F_IS = -0.60) indicated an excess of heterozygotes. Linkage disequilibrium and low allelic richness were found only in stock A. Nei's genetic diversity was high in both stocks. Genetic distance (0.085) and genetic identity (0.918) showed similarity between the stocks, reflecting a possible common origin. 6.05% of the total genetic variance was due to differences between the stocks. A recent bottleneck effect was observed in both stocks. The results indicated high genetic diversity within the broodstocks and low genetic differentiation between them, caused by the reproductive management of the fish farms, the reduction in population size, and genetic exchange between the farms.
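For reference, the F_IS values above follow the standard population-genetics definition of the within-population inbreeding coefficient (textbook usage, not restated in the abstract), which compares observed heterozygosity H_O with the heterozygosity H_E expected under Hardy-Weinberg equilibrium:

    F_IS = 1 - H_O / H_E

For example, hypothetical values H_O = 0.48 and H_E = 0.60 give F_IS = 1 - 0.80 = 0.20, a heterozygote deficit of the magnitude reported for Pme2 and Pme28; negative values arise when H_O exceeds H_E.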

Relevance:

40.00%

Publisher:

Abstract:

Thesis (Ph.D., Computing) -- Queen's University, 2016-09-30.

Relevance:

40.00%

Publisher:

Abstract:

Shared management between businesses, government and society in the historic centre of the city of São Paulo began concurrently with the growth of nonprofit organizations (NGOs) in the 1990s. The program Ações Locais, coordinated by the NGO Associação Viva o Centro, belongs to this context; its mission is to bring together individuals, businesses and local government for economic, social and political development, so as to build citizenship in that area. This study provides a historical background on the formation of Brazilian citizenship and, from that reference, analyzes how citizenship is exercised within the program Ações Locais. The main conclusion of the analysis is that the program has been consolidated, despite enormous day-to-day difficulties, especially in its social inclusion actions for the poor. The dilemma of how to bring in the excluded segments of the population may indicate a new field for research and future studies.

Relevance:

40.00%

Publisher:

Abstract:

As the Housing Credit Agency responsible for allocating Tax Credits in the State of Iowa, IFA must adopt a written Qualified Allocation Plan (QAP). The purpose of the QAP is to set forth the criteria that IFA will use in evaluating and monitoring Projects submitted to it by the Developer/Ownership Entity for consideration in making an allocation of Tax Credits. The Governor must approve the QAP after the public has had the opportunity to comment through a public hearing.