917 results for LHC, CMS, Big Data
Abstract:
Results of a search for new phenomena in events with an energetic photon and large missing transverse momentum with the ATLAS experiment at the LHC are reported. Data were collected in proton-proton collisions at a center-of-mass energy of 8 TeV and correspond to an integrated luminosity of 20.3 fb−1. The observed data are well described by the expected Standard Model backgrounds. The expected (observed) upper limit on the fiducial cross section for the production of such events is 6.1 (5.3) fb at 95% confidence level. Exclusion limits are presented on models of new phenomena with large extra spatial dimensions, supersymmetric quarks, and direct pair production of dark-matter candidates.
Abstract:
Results of a search for new phenomena in events with large missing transverse momentum and a Higgs boson decaying to two photons are reported. Data from proton-proton collisions at a center-of-mass energy of 8 TeV and corresponding to an integrated luminosity of 20.3 fb−1 have been collected with the ATLAS detector at the LHC. The observed data are well described by the expected Standard Model backgrounds. Upper limits on the cross section of events with large missing transverse momentum and a Higgs boson candidate are also placed. Exclusion limits are presented for models of physics beyond the Standard Model featuring dark-matter candidates.
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Abstract:
DnaSP is a software package for the analysis of DNA polymorphism data. The present version introduces several new modules and features which, among other options, allow: (1) handling big data sets (~5 Mb per sequence); (2) conducting a large number of coalescent-based tests by Monte Carlo computer simulations; (3) extensive analyses of the genetic differentiation and gene flow among populations; (4) analysing the evolutionary pattern of preferred and unpreferred codons; (5) generating graphical outputs for easy visualization of results. Availability: The software package, including complete documentation and examples, is freely available to academic users from: http://www.ub.es/dnasp
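The kind of polymorphism statistic such a package computes can be illustrated with a short sketch; the following Python snippet is not DnaSP code, and the toy alignment is made up for the example. It estimates nucleotide diversity (pi) as the average pairwise proportion of differing sites.
```python
from itertools import combinations

def nucleotide_diversity(alignment):
    """Average pairwise proportion of differing sites (pi) over an alignment
    of equal-length DNA sequences; gaps and ambiguous sites are skipped."""
    total, pairs = 0.0, 0
    for a, b in combinations(alignment, 2):
        compared = diffs = 0
        for x, y in zip(a, b):
            if x in "ACGT" and y in "ACGT":
                compared += 1
                diffs += x != y
        if compared:
            total += diffs / compared
            pairs += 1
    return total / pairs if pairs else 0.0

# Toy alignment, made up for the example.
print(nucleotide_diversity(["ACGTACGT", "ACGTACGA", "ACGTTCGT"]))
```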
Resumo:
The European Space Agency's Gaia mission will create the largest and most precise three dimensional chart of our galaxy (the Milky Way), by providing unprecedented position, parallax, proper motion, and radial velocity measurements for about one billion stars. The resulting catalogue will be made available to the scientific community and will be analyzed in many different ways, including the production of a variety of statistics. The latter will often entail the generation of multidimensional histograms and hypercubes as part of the precomputed statistics for each data release, or for scientific analysis involving either the final data products or the raw data coming from the satellite instruments. In this paper we present and analyze a generic framework that allows the hypercube generation to be easily done within a MapReduce infrastructure, providing all the advantages of the new Big Data analysis paradigmbut without dealing with any specific interface to the lower level distributed system implementation (Hadoop). Furthermore, we show how executing the framework for different data storage model configurations (i.e. row or column oriented) and compression techniques can considerably improve the response time of this type of workload for the currently available simulated data of the mission. In addition, we put forward the advantages and shortcomings of the deployment of the framework on a public cloud provider, benchmark against other popular solutions available (that are not always the best for such ad-hoc applications), and describe some user experiences with the framework, which was employed for a number of dedicated astronomical data analysis techniques workshops.
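The map-reduce pattern underlying such a framework can be sketched roughly as follows; this is not the Gaia framework's actual interface, and the column order, bin widths and Hadoop-Streaming-style invocation are assumptions made for the example. The mapper bins catalogue rows into cells of a two-dimensional histogram and the reducer sums the per-cell counts.
```python
import sys

# Mapper: read catalogue rows "source_id,ra,dec,..." from stdin and emit a
# count of 1 for the (ra_bin, dec_bin) histogram cell each row falls into.
def mapper(lines, ra_width=1.0, dec_width=1.0):
    for line in lines:
        fields = line.rstrip("\n").split(",")
        try:
            ra, dec = float(fields[1]), float(fields[2])  # assumed column order
        except (IndexError, ValueError):
            continue
        print(f"{int(ra // ra_width)},{int(dec // dec_width)}\t1")

# Reducer: sum the counts for each histogram cell (input arrives sorted by key).
def reducer(lines):
    current, total = None, 0
    for line in lines:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    # Hadoop Streaming would run this file once as the mapper and once as the
    # reducer; the role is selected here with a command-line argument.
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```
A framework such as the one described in the abstract would hide this plumbing, exposing only the choice of binned quantities and bin widths to the user.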
Abstract:
The final year project came to us as an opportunity to get involved in a topic that had proved attractive during our economics degree: statistics and its application to the analysis of economic data, i.e. econometrics. Moreover, the combination of econometrics and computer science is a very hot topic nowadays, given the Information Technologies boom of the last decades and the consequent exponential increase in the amount of data collected and stored day by day. Data analysts able to deal with Big Data and to extract useful results from it are in high demand these days and, in our understanding, the work they do, although sometimes controversial in terms of ethics, is a clear source of added value both for private corporations and for the public sector. For these reasons, the essence of this project is the study of a statistical instrument, directly related to computer science, that is suited to the analysis of large datasets: Partial Correlation Networks. The structure of the project has been determined by our objectives throughout its development. First, the characteristics of the studied instrument are explained, from the basic ideas up to the features of the model behind it, with the final goal of presenting the SPACE model as a tool for estimating interconnections between elements in large data sets. Afterwards, an illustrated simulation is performed in order to show the power and efficiency of the model presented. Finally, the model is put into practice by analyzing a relatively large set of real-world data, with the objective of assessing whether the proposed statistical instrument is valid and useful when applied to a real multivariate time series. In short, our main goals are to present the model and to evaluate whether Partial Correlation Network Analysis is an effective, useful instrument that allows finding valuable results in Big Data. The findings throughout this project suggest that the Partial Correlation Estimation by Joint Sparse Regression Models approach presented by Peng et al. (2009) works well under the assumption of sparsity of the data. Moreover, partial correlation networks are shown to be a valid tool to represent cross-sectional interconnections between elements in large data sets. The scope of this project is, however, limited, as there are some sections in which deeper analysis would have been appropriate. Considering intertemporal connections between elements, the choice of the tuning parameter lambda, and a deeper analysis of the results in the real data application are examples of aspects in which this project could be extended. To sum up, the analyzed statistical tool has proved to be a very useful instrument for finding relationships that connect the elements present in a large data set. Ultimately, partial correlation networks allow the owner of such a data set to observe and analyze existing linkages that would otherwise have been overlooked.
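As a minimal illustration of the idea, and using scikit-learn's graphical lasso as a stand-in rather than the SPACE joint sparse regression estimator actually studied in the project, the following sketch estimates a sparse precision matrix from synthetic data and converts it into a partial correlation network; the data, the penalty value (the counterpart of the tuning parameter lambda) and the edge threshold are assumptions.
```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Synthetic stand-in for the large data set analysed in the project.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
X[:, 1] += 0.7 * X[:, 0]        # inject two known linkages
X[:, 5] += 0.5 * X[:, 4]

# Sparse inverse covariance (precision) matrix; alpha plays the role of the
# tuning parameter lambda discussed in the project.
precision = GraphicalLasso(alpha=0.1).fit(X).precision_

# Partial correlation between variables i and j: -p_ij / sqrt(p_ii * p_jj).
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Edges of the network: pairs with a non-negligible partial correlation.
edges = [(i, j, round(float(partial_corr[i, j]), 3))
         for i in range(partial_corr.shape[0])
         for j in range(i + 1, partial_corr.shape[0])
         if abs(partial_corr[i, j]) > 1e-3]
print(edges)
```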
Abstract:
Nowadays, the possibilities of Big Data are countless. There is a vast amount of information generated by the general population and publicly available. The challenge is to be able to work with this information and extract useful, value-generating conclusions. In this project we want to analyze over time the general population's interest in a common illness such as influenza, and relate it to past flu outbreaks so that future outbreaks can be extrapolated and predicted. In the hands of the health authorities, this information can be of great help in preventing peaks of demand on emergency services, allowing them to anticipate and manage the available resources more effectively and thus provide a better service to the population at large. In this way it is the users themselves who, without knowing it, enable a greater and better response from the health services through the information they freely share, yielding valuable benefits for the general population.
Abstract:
In this project we want to analyze over time the general population's interest in a common illness such as influenza, and relate it to past flu outbreaks so that future outbreaks can be extrapolated and predicted. In the hands of the health authorities, this information can be of great help in preventing peaks of demand on emergency services, allowing them to anticipate and manage the available resources more effectively and thus provide a better service to the population at large.
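A minimal sketch of this kind of analysis, with entirely hypothetical weekly series standing in for the real search-interest and case data, is to correlate reported flu cases with the search-interest series at different lags; a peak at a positive lag would suggest that search interest anticipates the outbreak by that many weeks.
```python
import numpy as np
import pandas as pd

# Hypothetical weekly series: search interest in "flu" and reported cases,
# with the case counts built to follow the interest series two weeks later.
rng = np.random.default_rng(1)
weeks = pd.date_range("2014-01-05", periods=104, freq="W")
interest = pd.Series(50 + 30 * np.sin(np.arange(104) * 2 * np.pi / 52)
                     + rng.normal(0, 5, 104), index=weeks)
cases = interest.shift(2).bfill() * 3 + rng.normal(0, 20, 104)

# Correlate case counts with the interest series at several lags: the highest
# correlation at a positive lag indicates how many weeks search interest
# leads the reported cases.
for lag in range(5):
    print(f"lag {lag} weeks: r = {cases.corr(interest.shift(lag)):.2f}")
```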
Abstract:
Feasibility study on the deployment of open-source software-defined storage in enterprise environments. Comparison of Gluster, Ceph, OpenAFS, TahoeFS and XtreemFS.
Abstract:
Proceedings of Internet, Law and Politics. A decade of transformations.
Abstract:
This project consists of designing and implementing an information system hosted in an Oracle database, in order to support the Big Data project, whose objective is to cross-reference the health data and the physical activity data of European citizens.
Abstract:
Peer-reviewed
Abstract:
The significance of services as business and human activities has increased dramatically throughout the world in the last three decades. Becoming an ever more competitive and efficient service provider while still being able to provide unique value opportunities for customers requires new knowledge and ideas. Part of this knowledge is created and utilized in daily activities in every service organization, but not all of it, and therefore an emerging phenomenon in the service context is information awareness. Terms like big data and the Internet of things are not only modern buzzwords; they also describe urgent requirements for a new type of competences and solutions. When the amount of information increases and the systems processing information become more efficient and intelligent, it is human understanding and objectives that may become separated from the automated processes and technological innovations. This is an important challenge and the core driver for this dissertation: what kind of information is created, possessed and utilized in the service context, and even more importantly, what information exists but is not acknowledged or used? In this dissertation the focus is on the relationship between service design and service operations. Reframing this relationship refers to viewing the service system from the architectural perspective. The selected perspective allows the relationship between design activities and operational activities to be analysed as an information system while maintaining a tight connection to existing service research contributions and approaches. This innovative approach is supported by a research methodology that relies on design science theory. The methodological process supports the construction of a new design artifact based on existing theoretical knowledge, the creation of new innovations, and the testing of the design artifact components in real service contexts. The relationship between design and operations is analysed in the health care and social care service systems. Existing contributions in service research tend to abstract services and service systems as value creation, working or interactive systems. This dissertation adds an important information processing system perspective to the research. The main contribution focuses on the following argument: only part of the service information system is automated and computerized, whereas a significant part of information processing is embedded in human activities, communication and ad-hoc reactions. The results indicate that the relationship between service design and service operations is more complex and dynamic than existing scientific and managerial models tend to suggest. Both activities create, utilize, mix and share information, making service information management a necessary but relatively unknown managerial task. At the architectural level, service-system-specific elements seem to disappear, but access to more general information elements and processes can be found. While this dissertation focuses on conceptual-level design artifact construction, the results also provide very practical implications for service providers. The personal, visual and hidden activities of a service, and more importantly all changes that take place in any service system, also have an information dimension. Making this information dimension visible and prioritizing the processed information based on service dimensions is likely to provide new opportunities to enhance activities and offer a new type of service potential for customers.