975 results for Data Center, Software Defined Networking, SDN


Relevance:

100.00%

Publisher:

Abstract:

Advanced building energy data visualization is a way to detect performance problems in commercial buildings. Sensors placed in a building collect data such as air temperature and electrical power, which can then be processed by data visualization software. This software generates visual diagrams so the building manager or operator can see, for example, whether power consumption is too high. A first step (before sensors are installed in a building) to assess a building's energy consumption can be to use a benchmarking tool. A number of benchmarking tools are available for free on the Internet. Each tool takes a slightly different approach, but they all show how a building's energy consumption compares with that of other similar buildings. In this study a new web design was developed for the benchmarking tool CalARCH. CalARCH is developed at the Berkeley Lab in Berkeley, California, USA. It uses data collected only from buildings in California and is intended only for comparing California buildings with other similar buildings in the state. Five different versions of the web site were made, and a web survey was conducted to determine which version would be best for CalARCH. The results showed that Version 5 and Version 3 were the best, and a new version was then made based on these two. This study was carried out at the Lawrence Berkeley Laboratory.
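
As an illustration of what the benchmarking step does, a minimal Python sketch of ranking a building's energy use intensity (EUI) against a peer group; the function name and all peer values are invented for illustration and are not CalARCH data or code:

    # Hypothetical sketch: rank a building's EUI against peers,
    # as benchmarking tools such as CalARCH do conceptually.
    def eui_percentile(building_eui, peer_euis):
        """Percentage of peer buildings with a lower EUI (kWh/m2/year)."""
        below = sum(1 for e in peer_euis if e < building_eui)
        return 100.0 * below / len(peer_euis)

    peers = [45.0, 52.3, 60.1, 71.8, 80.5, 95.2]  # made-up peer EUIs
    print(eui_percentile(68.0, peers))  # -> 50.0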

Relevance:

100.00%

Publisher:

Abstract:

Equipped with recent advances in electronics and communication, wireless sensor networks have developed rapidly to provide reliable information with higher Quality of Service (QoS) at lower cost. This paper presents a real-time tracking system developed as part of the ISSNIP BigNet Testbed project. A GPS receiver was used to acquire the position of mobile nodes, and GSM technology was used as the data communication medium. Moreover, Google map based data visualization software was developed to locate the mobile nodes via the Internet. The system can accommodate various sensors, such as temperature, pressure, and pH, and monitor the status of the nodes.
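
The abstract does not specify data formats, but GPS receivers commonly emit NMEA sentences; a minimal Python sketch of extracting decimal-degree coordinates from a GPGGA sentence (illustrative only, not the project's code):

    def parse_gpgga(sentence):
        """Parse latitude/longitude from an NMEA GPGGA sentence into decimal degrees."""
        fields = sentence.split(",")
        lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0  # ddmm.mmm
        if fields[3] == "S":
            lat = -lat
        lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0  # dddmm.mmm
        if fields[5] == "W":
            lon = -lon
        return lat, lon

    print(parse_gpgga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
    # -> (48.1173, 11.516666666666667)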

Relevance:

100.00%

Publisher:

Abstract:

This exploratory study analysed the Threshold Learning Outcomes ("TLOs") specified in the Bachelor of Laws Learning and Teaching Academic Standards Statement December 2010, and the Competency Standards for Entry-Level Lawyers for Practical Legal Training, as updated by the Australasian Professional Legal Education Council and Law Admissions Consultative Committee in February 2002 ("NCS"). The qualitative analysis was undertaken using NVivo computer-assisted qualitative data analysis software ("CAQDAS") to investigate how skills were categorised and defined in each of the documents. The results were then analysed to compare the respective categorisations and definitions of skills, and to point to potential complements, overlaps, conflicts, gaps, and blind spots between the TLOs and the NCS. The findings, and the methodology adopted, may provide insights for the future instructional design, content, and delivery of Practical Legal Training programs, and for future reviews of the TLOs and NCS.

Relevance:

100.00%

Publisher:

Abstract:

The calculation of the first few moments of elution peaks is necessary to determine the amount of a component in the sample (peak area, or zeroth moment), the retention factor (first moment), and the column efficiency (second moment). Performing these calculations is a time-consuming and tedious task for the analyst, so data analysis is generally completed by the data stations associated with modern chromatographs. However, data acquisition software is a black box that provides no information to chromatographers on how their data are treated. These results are too important to be accepted on blind faith. The location of the peak integration boundaries is most important. In this manuscript, we explore the relationships between the size of the integration area, the relative position of the peak maximum within this area, and the accuracy of the calculated moments. We found that relationships between these parameters do exist and that computers can be programmed with relatively simple routines to automate the extraction of key peak parameters and to select acceptable integration boundaries. We also found that the most accurate results are obtained when the S/N ratio exceeds 200.
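
The moments themselves are standard; a minimal Python sketch of computing them numerically for a synthetic Gaussian peak between fixed integration boundaries (not the manuscript's actual routines):

    import numpy as np

    t = np.linspace(0.0, 10.0, 2001)              # time axis, min
    h = np.exp(-0.5 * ((t - 5.0) / 0.25) ** 2)    # synthetic detector signal
    dt = t[1] - t[0]

    area = h.sum() * dt                           # zeroth moment: peak area
    mu1 = (t * h).sum() * dt / area               # first moment: retention time
    mu2 = ((t - mu1) ** 2 * h).sum() * dt / area  # second central moment: variance
    plates = mu1 ** 2 / mu2                       # column efficiency N = mu1^2 / mu2

    print(area, mu1, mu2, plates)                 # -> ~0.627, 5.0, 0.0625, 400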

Relevance:

100.00%

Publisher:

Abstract:

Privacy preservation in data mining and data release has attracted increasing research interest over the past decades. Differential privacy is one influential privacy notion that offers a rigorous and provable privacy guarantee for data mining and data release. Existing studies on differential privacy assume that the records in a data set are sampled independently. In real-world applications, however, records are rarely independent. The relationships among records are referred to as correlated information, and such a data set is called a correlated data set. A differential privacy technique applied to a correlated data set will disclose more information than expected, which is a serious privacy violation. Although recent research has addressed this new privacy violation, a solid solution for correlated data sets is still needed. Moreover, how to decrease the large amount of noise incurred by differential privacy on correlated data sets has yet to be explored. To fill this gap, this paper proposes an effective correlated differential privacy solution by defining a correlated sensitivity and designing a correlated data releasing mechanism. By taking the correlation levels between records into account, the proposed correlated sensitivity can significantly decrease the noise compared with the traditional global sensitivity. The correlated data releasing mechanism, the correlated iteration mechanism, is designed on the basis of an iterative method to answer large numbers of queries. Compared with the traditional method, the proposed correlated differential privacy solution enhances the privacy guarantee for a correlated data set at a lower accuracy cost. Experimental results show that the proposed solution outperforms traditional differential privacy in terms of mean squared error on large groups of queries, suggesting that correlated differential privacy can successfully retain utility while preserving privacy.
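
A toy Python sketch of the core idea only; the paper's exact definitions of correlation degree and correlated sensitivity are not reproduced in the abstract, so the numbers and the max-of-row-sums rule below are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical pairwise correlation degrees between three records (1 = self).
    delta = np.array([[1.0, 0.6, 0.0],
                      [0.6, 1.0, 0.2],
                      [0.0, 0.2, 1.0]])

    # For a count query, global sensitivity is 1; with correlation, a record's
    # effective influence can be taken as the sum of its correlation degrees,
    # and the correlated sensitivity as the worst case over records.
    correlated_sensitivity = delta.sum(axis=1).max()

    epsilon = 1.0
    true_count = 3
    noisy_count = true_count + rng.laplace(scale=correlated_sensitivity / epsilon)
    print(correlated_sensitivity, noisy_count)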

Relevance:

100.00%

Publisher:

Abstract:

Instrumentation and automation play a vital role in managing the water industry. These systems generate vast amounts of data that must be managed effectively to enable intelligent decision making. Time series data management software, commonly known as data historians, is used for collecting and managing real-time (time series) information. More advanced software solutions provide a data infrastructure, or utility-wide Operations Data Management System (ODMS), that stores, manages, calculates, displays, shares, and integrates data from the multiple disparate automation and business systems used daily in water utilities. These ODMS solutions are proven and can manage data from smart water meters through to the sharing of data across third-party corporations. This paper focuses on practical utility successes in the water industry where utility managers are leveraging instantaneous access to data from proven, commercial off-the-shelf ODMS solutions to enable better real-time decision making. Successes include saving $650,000 per year in water loss control, safeguarding water quality, and saving millions of dollars in energy management and asset management. Immediate opportunities exist to integrate the research being done in academia with these ODMS solutions in the field and to extend these successes to utilities around the world.
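
To make the data historian concept concrete, a miniature Python/SQLite sketch of the store-and-aggregate pattern such systems provide; tag names and values are invented, and this models no particular commercial ODMS:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE readings (tag TEXT, ts TEXT, value REAL)")
    db.executemany(
        "INSERT INTO readings VALUES (?, ?, ?)",
        [("pump1.flow", "2024-01-01T00:00:00", 12.5),
         ("pump1.flow", "2024-01-01T00:15:00", 13.1)],
    )
    # A typical historian query: average flow for a tag over a time window.
    avg = db.execute(
        "SELECT AVG(value) FROM readings WHERE tag = ? AND ts BETWEEN ? AND ?",
        ("pump1.flow", "2024-01-01T00:00:00", "2024-01-01T01:00:00"),
    ).fetchone()[0]
    print(avg)  # -> 12.8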

Relevance:

100.00%

Publisher:

Abstract:

The objective of the present article is to identify and discuss the possibilities of using qualitative data analysis software within the framework of procedures proposed by socio-discursive interactionism (SDI), with emphasis on freely distributed software or free versions of commercial software. A literature review of software for qualitative data analysis in the social sciences and humanities, focusing on language studies, is presented. Some tools, such as Weft QDA, MLCT, Yoshikoder, and Tropes, are examined with their respective features and functions. The software Tropes is examined in more detail because of its particular relation to language and semantic analysis, as well as its embedded classification of linguistic elements such as types of verbs, adjectives, and modalizations. Although completely automating an SDI-based analysis is not feasible, these programs appear to be powerful helpers in analyzing specific questions. Still, it seems important to be familiar with the software options and to use different applications in order to obtain a more diversified view of the data. It is up to the researcher to be critical of the analysis provided by the machine.
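
A toy Python sketch of the kind of category counting such tools automate; the categories and word lists are invented, and this is not how Tropes itself works internally:

    CATEGORIES = {
        "modalization": {"perhaps", "possibly", "must", "should"},
        "stative_verb": {"is", "are", "seems", "remains"},
    }

    def count_categories(text):
        """Count occurrences of each hand-defined linguistic category."""
        words = text.lower().split()
        return {cat: sum(w in vocab for w in words)
                for cat, vocab in CATEGORIES.items()}

    print(count_categories("The analysis is perhaps incomplete but remains useful"))
    # -> {'modalization': 1, 'stative_verb': 2}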

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

The basic needs of companies are usually the same whether the company is large or small: the infrastructure on which they run their business processes and the applications that manage them are nearly identical. If we divide a company's ICT infrastructure into hardware, system, and applications, we can see that in most companies the system layer is almost identical. Moreover, thanks to virtualization, which has swept into the world of computing, we can make software fully independent of hardware, gaining enormous flexibility when planning infrastructure deployments. This final degree project (TFG) is developed around these two ideas: uniformity of the system and independence from the hardware. For the first, the basic infrastructure (system) that any company usually has is studied; a solution valid for a large number of companies in our environment is proposed, and its design is carried out. With the second idea, we develop a service-based system that is complete enough to meet the identified needs yet flexible enough that capacity or services can grow easily without having to modify the structure of the system or its modules. We therefore produce an integral, complete design, covering both hardware and software, with emphasis on systems integration and the interrelation between their different elements. An economic assessment of the design is also given. Finally, as an example of the flexibility of the chosen design, we present two modifications of the original design. The first is an extension that provides greater security through storage redundancy and, as a definitive step, sets up a remote data center (CPD). The second is a low-cost design in which, keeping the same services, the cost is lowered by using products with somewhat lower performance while the solution as a whole maintains high levels of quality and service.

Relevance:

100.00%

Publisher:

Abstract:

The thesis describes the implementation of calibration, format-translation, and data conditioning software for the radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulation, performance, and software implementation. Some techniques are taken from the literature and the current state of the art, while other algorithms were conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by specific subroutines. Particular attention has been given to the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite in terms of both the sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibrating transmission-media noise sources in radiometric observables proved to be an essential operation on radiometric data in order to meet the ever more demanding error budget requirements of modern deep-space missions. A completely autonomous, all-around propagation-media calibration software package is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described software is planned to be compatible with the current standards for tropospheric noise calibration used by both agencies, such as the AMC, TSAC, and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
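
The abstract does not state which tropospheric model the software uses; as background, a minimal Python sketch of the classic Saastamoinen zenith hydrostatic delay, one common ingredient of tropospheric path-delay calibration:

    import math

    def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
        """Zenith hydrostatic delay (meters) from surface pressure (Saastamoinen)."""
        return (0.0022768 * pressure_hpa /
                (1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 2.8e-7 * height_m))

    print(saastamoinen_zhd(1013.25, math.radians(44.5), 100.0))  # ~2.31 m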

Relevance:

100.00%

Publisher:

Abstract:

Modern ESI-LC-MS/MS techniques, combined with bottom-up approaches, allow the qualitative and quantitative characterization of several thousand proteins in a single experiment. Data-independent acquisition methods such as MSE and the ion-mobility variants HDMSE and UDMSE are particularly well suited to label-free protein quantification. Because of their high complexity, the data acquired in this way place special demands on the analysis software, and quantitative analysis of MSE/HDMSE/UDMSE data has so far been limited to a few commercial solutions.

In this work, a strategy and a series of new methods for cross-run quantitative analysis of label-free MSE/HDMSE/UDMSE data were developed and implemented as the software ISOQuant. The commercial software PLGS is used for the first steps of the data analysis (feature detection, peptide and protein identification). The independent PLGS results of all runs of an experiment are then merged into a relational database and reworked with dedicated algorithms (retention time alignment, feature clustering, multidimensional intensity normalization, multi-stage data filtering, protein inference, redistribution of the intensities of shared peptides, protein quantification). This post-processing significantly increases the reproducibility of the qualitative and quantitative results.

To evaluate the performance of the quantitative data analysis and compare it with other solutions, a set of exactly defined hybrid-proteome samples was developed. The samples were acquired with the MSE and UDMSE methods, analyzed with Progenesis QIP, synapter, and ISOQuant, and compared. In contrast to synapter and Progenesis QIP, ISOQuant achieved both high reproducibility of protein identification and high precision and trueness of protein quantification.

In conclusion, the presented algorithms and the analysis workflow enable reliable and reproducible quantitative data analyses. With ISOQuant, a simple and efficient tool for routine high-throughput analyses of label-free MSE/HDMSE/UDMSE data was developed. The hybrid-proteome samples and the evaluation metrics constitute a comprehensive system for evaluating quantitative acquisition and data analysis systems.
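
ISOQuant's multidimensional normalization is more elaborate than can be shown here; a simplified Python sketch of the basic idea of cross-run intensity normalization (rescaling runs so their medians agree), with made-up intensities:

    import numpy as np

    runs = [np.array([1.0e5, 2.0e5, 4.0e5]),   # hypothetical feature
            np.array([2.2e5, 4.1e5, 8.3e5])]   # intensities per run

    target = np.median(np.concatenate(runs))
    normalized = [r * (target / np.median(r)) for r in runs]
    for r in normalized:
        print(r, np.median(r))  # every run now shares the same median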

Relevance:

100.00%

Publisher:

Abstract:

Software repositories have been getting a lot of attention from researchers in recent years. In order to analyze software repositories, it is necessary first to extract raw data from the version control and problem tracking systems. This poses two challenges: (1) extraction requires a non-trivial effort, and (2) the results depend on the heuristics used during extraction. These challenges burden researchers who are new to the community and make it difficult to benchmark software repository mining, since it is almost impossible to reproduce experiments done by another team. In this paper we present the TA-RE corpus. TA-RE collects extracted data from software repositories in order to build a collection of projects that will simplify the extraction process. Additionally, the collection can be used for benchmarking. As a first step we propose an exchange language capable of making the sharing and reuse of such data as simple as possible.
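
The abstract does not show the proposed exchange language, so the record layout below is a hypothetical Python/JSON sketch of the idea of sharing extracted repository data in a common format; all field names are invented:

    import json

    commit_record = {
        "project": "example-project",
        "commit": "a3f9c1d",
        "author": "jane@example.org",
        "timestamp": "2006-05-04T12:00:00Z",
        "files": ["src/main.c", "doc/readme.txt"],
        "linked_issues": [42],
    }
    print(json.dumps(commit_record, indent=2))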

Relevance:

100.00%

Publisher: