968 results for Statistics - Data processing
Abstract:
Athletic training is "a process of athlete development guided by scientific principles which, through planned and systematic influences (loads) on performance capacity, aims to lead the athlete to high and superior performances in a sport or sport discipline" (Harre, 1982). Appropriate sports training should begin in childhood, so that the young athlete can progressively and systematically develop body and mind toward athletic excellence (Bompa, 2000; Weineck, 1997). Yet many coaches, in their attempt to reach high-level results quickly, expose young athletes to highly specific and rigorous sports training without taking the time to properly develop the physical and motor abilities and the fundamental motor skills underlying sport-specific skills (Bompa, 2000), hence the term "early specialization." To counter the harmful consequences of early specialization, new training approaches have been proposed. One of them is to practice several sports at a young age (Fraser-Thomas, Côté and Deakin, 2008; Gould and Carson, 2004; Judge and Gilreath, 2009; LeBlanc and Dickson, 1997; Mostafavifar, Best and Myer, 2013), hence the term "sport diversification." Several sport and professional organizations have chosen to promote and implement programs based on sport diversification (Kaleth and Mikesky, 2010).
It is following this awareness of the harmful effects of early specialization that physical activity professionals at a Quebec secondary school (a physical educator, a kinesiologist and a sport development officer) set up an innovative multisport-study program in the first cycle of secondary school, inspired by sport science and by the guidelines of the Long-Term Athlete Development (LTAD) model (Balyi, Cardinal, Higgs, Norris and Way, 2005). This research project concerns the development of physical and motor abilities in young athletes enrolled in a sport specialization program and in a sport diversification program at the "Train to Train" stage (ages 12 to 16) of the LTAD model (Balyi et al., 2005). The main objective of this study is to report on the evolution of the physical and motor abilities of young student-athletes enrolled, on the one hand, in a soccer sport-study program (specialization) and, on the other hand, in a multisport-study program (diversification). More specifically, this study attempts to (a) draw a detailed portrait of the evolution of the physical and motor abilities of the student-athletes in each program and relate it to each sport program's annual plan, and (b) report the differences in physical and motor abilities observed between the two programs. The research project was carried out in a secondary school in the province of Quebec. In total, 53 first-year secondary student-athletes were retained for the project according to their willingness to participate: 23 enrolled in the soccer sport-study program and 30 enrolled in the multisport-study program. All student-athletes were aged 11 to 13.
Thirteen standardized tests of physical and motor abilities were administered to the student-athletes of both sport programs at the beginning, middle and end of the school year. Data were processed using descriptive statistics and a repeated-measures analysis of variance. The results show that (a) the physical and motor abilities of the student-athletes of both programs improved over the school year; (b) the evolution of these abilities is relatively easy to relate to each program's annual plan; (c) student-athletes in the multisport-study program generally performed similarly to those in the soccer sport-study program; and (d) over the school year, student-athletes in the soccer sport-study program improved their cardiorespiratory endurance more, whereas those in the multisport-study program improved more in (a) arm segmental speed, (b) agility in the circle-run test and (c) lower-limb muscular power. This confirms that the physical and motor abilities developed in young athletes who specialize early are rather specific to the sport practiced (Balyi et al., 2005; Bompa, 1999; Cloes, Delfosse, Ledent and Piéron, 1994; Mattson and Richards, 2010), whereas those developed through sport diversification are more varied (Coakley, 2010; Gould and Carson, 2004; White and Oatman, 2009). These results can be explained by (a) the specificity or diversity of the tasks proposed during training sessions, (b) the time devoted to each of these tasks and (c) the demands of soccer compared with the demands of practicing several sport disciplines.
However, the results remain difficult to interpret because of several confounds: (a) physical maturation, (b) the number of training hours completed during the previous school year, (c) the number of training hours offered by the two programs under study and (d) physical and sport activities practiced outside school. Moreover, this study does not assess the quality of the interventions and exercises proposed during training, nor the student-athletes' motivation to take part in the training sessions or in the physical and motor tests. Finally, it would be interesting to replicate the present study with different sport disciplines and to highlight the particular contribution of each discipline to the development of young athletes' physical and motor abilities.
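The repeated-measures analysis of variance used to process the testing data above can be sketched in a few lines. This is a minimal one-way illustration in Python on synthetic scores (the athlete count and test values below are hypothetical, not the study's data):

```python
import numpy as np

def repeated_measures_anova(data):
    """One-way repeated-measures ANOVA.

    data: (n_subjects, n_conditions) array, one row per athlete,
    one column per testing session (start, mid, end of school year).
    Returns the F statistic and its degrees of freedom.
    """
    n, k = data.shape
    grand_mean = data.mean()
    cond_means = data.mean(axis=0)      # mean per session
    subj_means = data.mean(axis=1)      # mean per athlete

    ss_cond = n * np.sum((cond_means - grand_mean) ** 2)
    ss_subj = k * np.sum((subj_means - grand_mean) ** 2)
    ss_total = np.sum((data - grand_mean) ** 2)
    ss_error = ss_total - ss_cond - ss_subj  # residual after removing subject effects

    df_cond = k - 1
    df_error = (n - 1) * (k - 1)
    f_stat = (ss_cond / df_cond) / (ss_error / df_error)
    return f_stat, df_cond, df_error

# Synthetic example: 10 athletes, scores improving across 3 sessions
rng = np.random.default_rng(0)
scores = rng.normal(50, 5, (10, 1)) + np.array([0.0, 2.0, 4.0]) + rng.normal(0, 1, (10, 3))
f, df1, df2 = repeated_measures_anova(scores)
print(f"F({df1}, {df2}) = {f:.2f}")
```

Because each athlete is tested three times, removing the between-subject sum of squares from the error term is what distinguishes this design from an ordinary one-way ANOVA.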
Abstract:
Recent advances in the massively parallel computational abilities of graphical processing units (GPUs) have increased their use for general purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse-engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications on reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
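The kind of information leakage noted above can be illustrated with a minimal sketch: CUDA fat binaries often embed PTX assembly as plain text, so kernel symbol names can be recovered with a simple byte scan. The byte string below is a synthetic stand-in for a compiled binary, not actual compiler output:

```python
import re

# PTX modules start with directives such as ".version" and ".target" and
# declare kernels with ".visible .entry <name>"; when the CUDA compiler
# embeds PTX in a fat binary, these kernel names survive as plain text.
PTX_ENTRY = re.compile(rb"\.visible\s+\.entry\s+([A-Za-z_$][\w$]*)")

def find_ptx_kernels(blob: bytes):
    """Return kernel symbol names found in embedded PTX text."""
    return [m.group(1).decode() for m in PTX_ENTRY.finditer(blob)]

# Synthetic stand-in for a fat binary with one embedded PTX module
sample = (b"\x7fELF...junk...//\n.version 7.0\n.target sm_70\n"
          b".visible .entry _Z9vectorAddPKfS0_Pfi(...)\n...more junk...")
print(find_ptx_kernels(sample))
```

Recovered mangled names like the one above can then be demangled to reveal the original C++ kernel signatures, which is one reason stripping or encrypting embedded PTX is a common protection step.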
Abstract:
Portuguese non-higher-education schools are equipped with infrastructure and equipment that make it possible to bring the world into the classroom, making the teaching and learning process richer and more motivating for students. The institutional adoption of a platform that follows the principles of the social web, SAPO Campus (SC), defined by openness, sharing, integration, innovation and personalization, can catalyze processes of change and innovation. The purpose of this study was to follow the adoption of SC in five schools, and to analyze its impact on the teaching and learning process and the way students and teachers relate to this technology. The schools involved were divided into two groups: in the first group, of three schools, the follow-up was of a more interventional and present nature, while in the second group, of two schools, only the dynamics that developed during the adoption and use of SC were observed. In this study, a longitudinal multi-case study, data processing techniques such as descriptive statistics, content analysis and Social Network Analysis (SNA) were applied in order, through permanent triangulation, to analyze the impacts observed from the use of SC. These impacts can be situated at three different levels: the institution, the teachers and the students. At the level of institutional adoption of a technology, it was found that the adoption sends a message to the whole organization and that, in the case of SC, it calls for collective participation in an open environment where hierarchies dissipate. It was also found that adoption should involve students in meaningful activities and dynamic strategies, preferably integrated into a mobilizing project.
The adoption of SC also catalyzed dynamics that changed patterns of content consumption and production, as well as a different attitude toward the role of the social web in teaching and learning. The conclusions also point to a set of factors, observed in the study, that influenced the adoption process: the role of leadership, the importance of teacher training, school culture, integration into a pedagogical project and, at a more basic level, access to technology. Some communities built around SAPO Campus, involving teachers, students and the wider community, evolved toward self-sustainability, in a path of reflection on pedagogical practices and sharing of experiences.
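One of the data processing techniques mentioned, Social Network Analysis, can be illustrated with a minimal degree-centrality sketch in plain Python; the interaction network below is hypothetical, not data from the study:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality for an undirected interaction network.

    edges: iterable of (node, node) pairs, e.g. who commented on whose post.
    Each node's degree is divided by the maximum possible degree (n - 1).
    """
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(neigh) / (n - 1) for node, neigh in adj.items()}

# Hypothetical teacher-student interactions on the platform
interactions = [("teacher", "ana"), ("teacher", "rui"),
                ("ana", "rui"), ("rui", "sofia")]
print(degree_centrality(interactions))
```

In an SNA of platform activity, high-centrality nodes identify the participants around whom a community's interaction concentrates, which is one way such studies track whether hierarchies dissipate or persist.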
Abstract:
The only method used to date to measure dissolved nitrate concentration (NITRATE) with sensors mounted on profiling floats is based on the absorption of light at ultraviolet wavelengths by the nitrate ion (Johnson and Coletti, 2002; Johnson et al., 2010; 2013; D’Ortenzio et al., 2012). Nitrate has a modest UV absorption band with a peak near 210 nm, which overlaps with the stronger absorption band of bromide, which has a peak near 200 nm. In addition, there is a much weaker absorption due to dissolved organic matter and light scattering by particles (Ogura and Hanya, 1966). The UV spectrum thus consists of three components: bromide, nitrate and a background due to organics and particles. The background also includes thermal effects on the instrument and slow drift. All of these latter effects (organics, particles, thermal effects and drift) tend to be smooth spectra that combine to form an absorption spectrum that is linear in wavelength over relatively short wavelength spans. If the light absorption spectrum is measured in the wavelength range of about 217 to 240 nm (the exact range is at the operator's discretion), then the nitrate concentration can be determined. Two different instruments based on the same optical principles are in use for this purpose. The In Situ Ultraviolet Spectrophotometer (ISUS), built at MBARI or at Satlantic, has been mounted inside the pressure hull of Teledyne/Webb Research APEX and NKE Provor profiling floats, with the optics penetrating through the upper end cap into the water. The Satlantic Submersible Ultraviolet Nitrate Analyzer (SUNA) is placed on the outside of APEX, Provor, and Navis profiling floats in its own pressure housing and is connected to the float through an underwater cable that provides power and communications. Power, communications between the float controller and the sensor, and data processing requirements are essentially the same for both ISUS and SUNA.
There are several possible algorithms that can be used for the deconvolution of nitrate concentration from the observed UV absorption spectrum (Johnson and Coletti, 2002; Arai et al., 2008; Sakamoto et al., 2009; Zielinski et al., 2011). In addition, the default algorithm available in Satlantic sensors is a proprietary approach, but it is not generally used on profiling floats. There are tradeoffs in every approach. To date, almost all nitrate sensors on profiling floats have used the Temperature Compensated Salinity Subtracted (TCSS) algorithm developed by Sakamoto et al. (2009), and this document focuses on that method. Further algorithm development is likely, so the data systems must clearly identify the algorithm that is used. It is also desirable that the data system allow prior data sets to be recalculated with new algorithms. To accomplish this, the float must report not just the computed nitrate but also the observed light intensities. The rule for obtaining a single NITRATE parameter is therefore: if the spectrum is present, NITRATE should be recalculated from the spectrum. The computation of nitrate concentration can also generate useful diagnostics of data quality.
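The general structure described above, a measured absorbance split into bromide, nitrate and a baseline that is linear in wavelength, can be sketched as a linear least-squares fit. This is a generic illustration, not the TCSS algorithm itself, and the extinction spectra below are made-up stand-ins for the laboratory-calibrated ones:

```python
import numpy as np

# Wavelength grid over the fitting window discussed above (~217-240 nm)
wl = np.linspace(217, 240, 50)

# Hypothetical extinction spectra (real ones come from lab calibration):
# bromide and nitrate modeled as decaying absorption tails in this window.
eps_br = np.exp(-(wl - 200.0) / 8.0)    # bromide tail, peak near 200 nm
eps_no3 = np.exp(-(wl - 210.0) / 12.0)  # nitrate tail, peak near 210 nm

def fit_nitrate(absorbance, eps_br, eps_no3, wl):
    """Least-squares split of a UV spectrum into bromide, nitrate and a
    linear baseline (organics, particles, thermal effects, drift)."""
    A = np.column_stack([eps_br, eps_no3, np.ones_like(wl), wl])
    coeffs, *_ = np.linalg.lstsq(A, absorbance, rcond=None)
    return coeffs  # [c_bromide, c_nitrate, baseline_intercept, baseline_slope]

# Synthetic spectrum: known component amounts plus a sloping background
true_br, true_no3 = 0.7, 0.3
spectrum = true_br * eps_br + true_no3 * eps_no3 + 0.01 + 1e-4 * wl
c_br, c_no3, b0, b1 = fit_nitrate(spectrum, eps_br, eps_no3, wl)
print(f"nitrate coefficient: {c_no3:.3f}")
```

The TCSS algorithm additionally corrects the bromide term for the temperature and salinity dependence of seawater absorption before the fit, which is the part this sketch omits.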
Abstract:
The CATARINA Leg1 cruise was carried out from June 22 to July 24, 2012 on board the B/O Sarmiento de Gamboa, under the scientific supervision of Aida Rios (CSIC-IIM). It included a repeat of the OVIDE hydrological section, previously performed in June 2002, 2004, 2006, 2008 and 2010 as part of the CLIVAR program (section name A25), under the supervision of Herlé Mercier (CNRS-LPO). This section begins near Lisbon (Portugal), runs through the West European Basin and the Iceland Basin, crosses the Reykjanes Ridge (300 miles north of the Charlie-Gibbs Fracture Zone), and ends at Cape Hoppe (southeast tip of Greenland). The objective of this repeated hydrological section is to monitor the variability of water mass properties and main current transports in the basin, complementing the international observation array relevant for climate studies. In addition, the Labrador Sea was partly sampled (stations 101-108) between Greenland and Newfoundland, but heavy weather prevented completion of the section south of 53°40’N. The quality of the CTD data is essential to the first objective of the CATARINA project, i.e. to quantify the Meridional Overturning Circulation and water mass ventilation changes and their effect on changes in the ocean's uptake and storage capacity for anthropogenic carbon. The CATARINA project was mainly funded by the Spanish Ministry of Science and Innovation and co-funded by the Fondo Europeo de Desarrollo Regional. The OVIDE hydrological section includes 95 surface-to-bottom stations from coast to coast, collecting profiles of temperature, salinity, oxygen and currents, spaced 2 to 25 nautical miles apart depending on the steepness of the topography. The positions of the stations closely follow those of OVIDE 2002. In addition, 8 stations were carried out in the Labrador Sea.
From the 24 bottles closed at various depths at each station, seawater samples are used for salinity and oxygen calibration, and for measurements of biogeochemical components that are not reported here. The data were acquired with a Seabird CTD (SBE911+) and an SBE43 dissolved-oxygen sensor, belonging to the Spanish UTM group. The SBE Data Processing software was used after decoding and cleaning the raw data. The LPO Matlab toolbox was then used to calibrate and bin the data as for the previous OVIDE cruises, using, on the one hand, pre- and post-cruise calibration results for the pressure and temperature sensors (done at Ifremer) and, on the other hand, the water samples from the 24 rosette bottles at each station for the salinity and dissolved oxygen data. A final accuracy of 0.002°C, 0.002 psu and 0.04 ml/l (2.3 µmol/kg) was obtained on the final temperature, salinity and dissolved oxygen profiles, compatible with the international requirements issued from the WOCE program.
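The bottle calibration step described above, matching sensor readings against bottle sample analyses to derive a correction and an attainable accuracy, can be sketched as a simple linear fit. The numbers below are hypothetical, not cruise data:

```python
import numpy as np

def bottle_calibration(sensor, bottle):
    """Fit a linear correction  bottle ≈ a * sensor + b  from matched
    sensor readings and bottle sample analyses; return the correction
    coefficients and the residual standard deviation (the attainable
    accuracy after correction)."""
    a, b = np.polyfit(sensor, bottle, 1)
    residuals = bottle - (a * sensor + b)
    return a, b, residuals.std(ddof=2)

# Hypothetical matched pairs: CTD salinity vs. salinometer bottle values
rng = np.random.default_rng(1)
true_offset = 0.004                      # sensor reads slightly low
sensor = rng.uniform(34.8, 35.2, 24)     # 24 bottles at one station
bottle = sensor + true_offset + rng.normal(0, 0.001, 24)

a, b, sigma = bottle_calibration(sensor, bottle)
corrected = a * sensor + b
print(f"residual std after correction: {sigma:.4f} psu")
```

In practice such fits are done over many stations, often with station- or pressure-dependent terms, before the corrected profiles are binned.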
Abstract:
This study aims to characterize the users of the National Long-Term Care Network (NL-TCN). The Portuguese National Health Service was restructured in 2006 with the creation of the National Long-Term Care Network, to respond to new health and social needs concerning continuity of care. Objectives: analyse the sociodemographic profile of network users and review hospital, local and regional management procedures. Methods: observational and experimental methods were used; data were processed and results presented with the Statistical Package for the Social Sciences, version 20, using descriptive statistics (frequencies, crosstabs and chi-square tests). A Pearson correlation test showed a positive correlation between procedure times at the local and regional management levels and hospital length of stay. Results: of a sample of 805 cases, 595 (74%) were admitted to the NL-TCN, a rate lower than the national average (86%). Almost half of the sample was admitted to Rehabilitation Units (46%), while nationally the highest number of admissions was to Home Care Teams (30%). The average time from hospital referral to network admission was 9.73 days, with a positive correlation between network management procedure times and hospital length of stay. Conclusions: among specialized units, waiting times were longest for Long-Term and Support Units (mean 30.27 days) and shortest for Home Care Teams (mean 5.57 days). The average time between local and regional management was 3.59 days. Almost 90% of referrals came from orthopaedics, internal medicine and neurology, and network users were mostly elderly (average 75 years old), female and married. Most users were admitted to inpatient units (78%) and only 15% remained in their home town.
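The Pearson correlation test used above can be reproduced in miniature; the paired values below are hypothetical illustrations, not the study's data:

```python
import numpy as np

# Hypothetical paired observations mirroring the study's variables:
# days spent in management procedures vs. hospital length of stay.
procedure_days = np.array([2, 4, 5, 7, 9, 10, 12, 15])
length_of_stay = np.array([6, 7, 9, 10, 14, 13, 18, 21])

# Pearson correlation coefficient, as computed by SPSS's bivariate test
r = np.corrcoef(procedure_days, length_of_stay)[0, 1]
print(f"r = {r:.3f}")
```

A strongly positive r, as in the study's finding, indicates that longer management procedure times go together with longer hospital stays, though correlation alone does not establish which drives which.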
Abstract:
This thesis builds a framework for evaluating downside risk from multivariate data via a special class of risk measures (RM). The peculiarity of the analysis lies in dispensing with strong assumptions on the data distribution and in its orientation toward the most critical data in risk management: data with asymmetries and heavy tails. At the same time, under typical assumptions such as ellipticity of the data's probability distribution, conformity with classical methods is shown. The constructed class of RM is a multivariate generalization of the coherent distortion RM, which possesses valuable properties for a risk manager. The design of the framework is twofold. The first part contains new computational geometry methods for high-dimensional data. The developed algorithms demonstrate the computability of the geometrical concepts used for constructing the RM. These concepts add visual intuition and simplify interpretation of the RM. The second part develops models for applying the framework to actual problems. The spectrum of applications ranges from robust portfolio selection to broader areas, such as stochastic conic optimization with risk constraints or supervised machine learning.
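A univariate instance of the coherent distortion risk measures mentioned above can be sketched in a few lines; the thesis's class is a multivariate generalization of this construction, so the sketch only illustrates the one-dimensional starting point:

```python
import numpy as np

def distortion_risk(losses, g):
    """Empirical distortion risk measure.

    losses: sample of losses (larger = worse).
    g: distortion function on [0, 1] with g(0) = 0 and g(1) = 1;
       when g is concave and increasing, the measure is coherent.
    """
    x = np.sort(losses)[::-1]                 # worst outcomes first
    n = len(x)
    p = np.arange(n + 1) / n
    weights = g(p[1:]) - g(p[:-1])            # mass on each order statistic
    return float(np.dot(weights, x))

# Expected shortfall at level alpha corresponds to g(u) = min(u / alpha, 1)
alpha = 0.5
es = distortion_risk(np.array([1.0, 2.0, 3.0, 4.0]),
                     lambda u: np.minimum(u / alpha, 1.0))
print(es)  # mean of the worst 50% of losses
```

With the expected-shortfall distortion, the weights concentrate on the tail of the loss distribution, which is precisely the heavy-tail emphasis the thesis describes.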
Abstract:
In the digital age, e-health technologies play a pivotal role in the processing of medical information. As personal health data represents sensitive information concerning a data subject, enhancing data protection and the security of systems and practices has become a primary concern. In recent years, there has been an increasing interest in the concept of Privacy by Design, which aims at developing a product or a service in a way that supports privacy principles and rules. In the EU, Article 25 of the General Data Protection Regulation provides a binding obligation to implement Data Protection by Design technical and organisational measures. This thesis explores how an e-health system could be developed, and how data processing activities could be carried out, so as to apply data protection principles and requirements from the design stage. The research attempts to bridge the gap between the legal and technical disciplines on DPbD by providing a set of guidelines for the implementation of the principle. The work is based on literature review, legal and comparative analysis, and investigation of the existing technical solutions and engineering methodologies. The work can be divided into theoretical and applied perspectives. First, it conducts a critical legal analysis of the principle of PbD and studies the DPbD legal obligation and the related provisions. It then contextualises the rule in the health care field by investigating the applicable legal framework for personal health data processing. Moreover, the research examines the US legal system through a comparative analysis. From an applied perspective, the research investigates the existing technical methodologies and tools for designing data protection and proposes a set of comprehensive DPbD organisational and technical guidelines for a crucial case study: an Electronic Health Record system.
Abstract:
The aim of this novel experimental study is to investigate the behaviour of a 2 m x 2 m model of a masonry groin vault, built by assembling blocks made of a 3D-printed plastic skin filled with mortar. The groin vault was chosen because this vulnerable roofing system is widespread in the historical heritage. Shaking-table tests are carried out to explore the vault's response under two support boundary conditions, involving four lateral confinement modes. Processing of the marker displacement data made it possible to examine the collapse mechanisms of the vault, based on the deformed shapes of the arches. A numerical evaluation then provides the orders of magnitude of the displacements associated with these mechanisms. Given that these displacements are related to the shortening and elongation of the arches, the final objective is the definition of a critical elongation between two diagonal bricks and, consequently, of a diagonal portion. This study continues previous work and takes another step forward in the research on ground motion effects on masonry structures.
Abstract:
With the CERN LHC program underway, there has been an acceleration of data growth in the High Energy Physics (HEP) field, and the usage of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been used successfully in many areas of HEP; nevertheless, developing a ML project and implementing it for production use is a highly time-consuming task that requires specific skills. Complicating this scenario is the fact that HEP data are stored in the ROOT data format, which is mostly unknown outside the HEP community. The work presented in this thesis focuses on the development of a ML as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed by the MLaaS4HEP framework, which allows reading and processing data and training ML models directly using ROOT files of arbitrary size from local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool to apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution was developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. A service with APIs was then developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows that produce trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements for addition to the INFN Cloud portfolio of services.
Abstract:
The thesis represents the conclusive outcome of the European Joint Doctorate programme in Law, Science & Technology funded by the European Commission through the Marie Skłodowska-Curie Innovative Training Networks actions within H2020, grant agreement n. 814177. The tension between data protection and privacy on one side, and the need to allow further uses of processed personal data on the other, is investigated, tracing the technological development of the de-anonymization/re-identification risk with an explorative survey. After acknowledging its span, it is questioned whether a certain degree of anonymity can still be granted, focusing on a double perspective: an objective and a subjective one. The objective perspective focuses on the data processing models per se, while the subjective perspective investigates whether the distribution of roles and responsibilities among stakeholders can ensure data anonymity.
Abstract:
This thesis investigates the legal, ethical, technical, and psychological issues of general data processing and artificial intelligence practices and the explainability of AI systems. It consists of two main parts. In the initial section, we provide a comprehensive overview of the big data processing ecosystem and the main challenges we face today. We then evaluate the GDPR’s data privacy framework in the European Union. The Trustworthy AI Framework proposed by the EU’s High-Level Expert Group on AI (AI HLEG) is examined in detail. The ethical principles for the foundation and realization of Trustworthy AI are analyzed along with the assessment list prepared by the AI HLEG. Then, we list the main big data challenges the European researchers and institutions identified and provide a literature review on the technical and organizational measures to address these challenges. A quantitative analysis is conducted on the identified big data challenges and the measures to address them, which leads to practical recommendations for better data processing and AI practices in the EU. In the subsequent part, we concentrate on the explainability of AI systems. We clarify the terminology and list the goals aimed at the explainability of AI systems. We identify the reasons for the explainability-accuracy trade-off and how we can address it. We conduct a comparative cognitive analysis between human reasoning and machine-generated explanations with the aim of understanding how explainable AI can contribute to human reasoning. We then focus on the technical and legal responses to remedy the explainability problem. In this part, GDPR’s right to explanation framework and safeguards are analyzed in-depth with their contribution to the realization of Trustworthy AI. Then, we analyze the explanation techniques applicable at different stages of machine learning and propose several recommendations in chronological order to develop GDPR-compliant and Trustworthy XAI systems.
Abstract:
A method using the ring-oven technique for pre-concentration in filter paper discs and near infrared hyperspectral imaging is proposed to identify four detergent and dispersant additives, and to determine their concentration in gasoline. Different approaches were used to select the best image data processing in order to gather the relevant spectral information. This was attained by selecting the pixels of the region of interest (ROI), using a pre-calculated threshold value of the PCA scores arranged as histograms, to select the spectra set; summing up the selected spectra to achieve representativeness; and compensating for the superimposed filter paper spectral information, also supported by scores histograms for each individual sample. The best classification model was achieved using linear discriminant analysis and genetic algorithm (LDA/GA), whose correct classification rate in the external validation set was 92%. Previous classification of the type of additive present in the gasoline is necessary to define the PLS model required for its quantitative determination. Considering that two of the additives studied present high spectral similarity, a PLS regression model was constructed to predict their content in gasoline, while two additional models were used for the remaining additives. The results for the external validation of these regression models showed a mean percentage error of prediction varying from 5 to 15%.
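The ROI-selection step described above, thresholding PCA scores to separate sample-spot pixels from the paper background, can be sketched on a synthetic unfolded hyperspectral image. The spectra and the fixed threshold below are made up; the paper derives the threshold from the score histogram:

```python
import numpy as np

def select_roi(pixels, threshold):
    """Select region-of-interest pixels by thresholding first-PC scores.

    pixels: (n_pixels, n_wavelengths) unfolded hyperspectral image.
    Returns a boolean mask of pixels whose PC1 score exceeds the threshold.
    """
    centered = pixels - pixels.mean(axis=0)
    # PCA via SVD: rows of Vt are loadings, centered @ loading are scores
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    load1 = Vt[0]
    # resolve the arbitrary SVD sign so absorbing pixels score positive
    if load1[np.argmax(np.abs(load1))] < 0:
        load1 = -load1
    scores_pc1 = centered @ load1
    return scores_pc1 > threshold

# Synthetic image: 80 background pixels (flat paper spectrum) and
# 20 sample-spot pixels carrying an added absorption band
rng = np.random.default_rng(2)
wl_pts = 30
paper = rng.normal(1.0, 0.01, (80, wl_pts))
band = np.exp(-0.5 * ((np.arange(wl_pts) - 15) / 3.0) ** 2)
spot = rng.normal(1.0, 0.01, (20, wl_pts)) + 0.5 * band
image = np.vstack([paper, spot])
mask = select_roi(image, threshold=0.0)
print(mask.sum())
```

Summing the spectra inside the mask, as the paper does, then yields one representative spectrum per sample for the subsequent LDA/GA classification and PLS regression steps.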
Abstract:
In this work, we discuss the use of multi-way principal component analysis combined with comprehensive two-dimensional gas chromatography to study the volatile metabolites of the saprophytic fungus Memnoniella sp., isolated in vivo by headspace solid-phase microextraction. This fungus has been identified as able to induce plant resistance against pathogens, possibly through its volatile metabolites. A suitable culture medium was inoculated, and its headspace was sampled with a solid-phase microextraction fiber and chromatographed every 24 h over seven days. Processing the raw chromatograms with multi-way principal component analysis allowed determination of the inoculation period during which the concentration of volatile metabolites was maximized, as well as discrimination of the relevant peaks from the complex culture-medium background. Several volatile metabolites not previously described in the literature on biocontrol fungi were observed, as well as sesquiterpenes and aliphatic alcohols. These results stress that, owing to the complexity of multidimensional chromatographic data, multivariate tools may be mandatory even for apparently trivial tasks, such as determining the temporal profile of metabolite production and extinction. Compared with conventional gas chromatography, however, the more complex data processing yields a considerable improvement in the information obtained from the samples.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física