906 results for Business planning -- Electronic data processing
Abstract:
Advances in communication, navigation and imaging technologies are expected to fundamentally change the methods currently used to collect data. Electronic data interchange strategies will also minimize data handling and automatically update files at the point of capture. This report summarizes the outcome of using a multi-camera platform as a method to collect roadway inventory data. It defines the basic system requirements expressed by the users who applied these techniques and examines how the application of the technology met those needs. A sign inventory case study was used to determine the advantages of creating and maintaining such a database, which also provides the capability to monitor performance criteria for a Safety Management System. The project identified that at least 75 percent of the data elements needed for a sign inventory can be gathered by viewing a high-resolution image.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Sports training is "a process of athlete development guided by scientific principles which, through planned and systematic influences (loads) on performance capacity, aims to lead the athlete to high and superior performance in a sport or sport discipline" (Harre, 1982). Appropriate sports training should begin in childhood, so that the young athlete can progressively and systematically develop body and mind in order to reach sporting excellence (Bompa, 2000; Weineck, 1997). However, many coaches, in their attempt to reach high-level results quickly, expose young athletes to very specific and rigorous sports training without taking the time to properly develop the physical and motor abilities and the fundamental motor skills underlying sport-specific skills (Bompa, 2000), hence the term "early specialization". To counter the harmful consequences of early specialization, new training approaches have been proposed. One way to do so is to practise several sports at a young age (Fraser-Thomas, Côté and Deakin, 2008; Gould and Carson, 2004; Judge and Gilreath, 2009; LeBlanc and Dickson, 1997; Mostafavifar, Best and Myer, 2013), hence the term "sport diversification". Several sports and professional organizations have decided to promote and implement programs based on sport diversification (Kaleth and Mikesky, 2010). It is following this awareness of the harmful effects of early specialization that physical activity professionals at a Quebec secondary school (a physical educator, a kinesiologist and a sport development officer) set up an innovative multisport-study program in the first cycle of secondary school, inspired by sport science and by the guidelines of the Long-Term Athlete Development (LTAD) model (Balyi, Cardinal, Higgs, Norris and Way, 2005). The present research project focuses on the development of physical and motor abilities in young athletes enrolled in a sport-specialization program and in young athletes enrolled in a sport-diversification program at the "Train to Train" stage (12 to 16 years) of the Long-Term Athlete Development model (Balyi et al., 2005). The main objective of this study is to document the evolution of the physical and motor abilities of young student-athletes enrolled, on the one hand, in a soccer sport-study program (specialization) and, on the other hand, in a multisport-study program (diversification). More specifically, this study attempts (a) to draw a detailed portrait of the evolution of the physical and motor abilities of the student-athletes in each program and to relate it to the annual planning of each sports program, and (b) to report on the differences in physical and motor abilities observed between the two programs. The research project was carried out in a secondary school in the province of Quebec. In total, 53 first-year secondary student-athletes were retained for the project on the basis of their willingness to participate in the study: 23 enrolled in the soccer sport-study program and 30 enrolled in the multisport-study program.
The student-athletes were all aged 11 to 13. Thirteen standardized tests of physical and motor abilities were administered to the student-athletes of both sports programs at the beginning, middle and end of the school year. The data were processed using descriptive statistics and a repeated-measures analysis of variance. The results reveal that (a) the physical and motor abilities of the student-athletes of both sports programs improved overall during the school year, (b) it is relatively easy to relate the evolution of the student-athletes' physical and motor abilities to the annual planning of each sports program, (c) the student-athletes of the multisport-study program generally performed at a level similar to that of the student-athletes of the soccer sport-study program, and (d) over the school year the student-athletes of the soccer sport-study program improved their cardiorespiratory endurance more, whereas those of the multisport-study program improved more in (a) segmental arm speed, (b) agility in the circle-run test and (c) lower-limb muscular power, confirming that the physical and motor abilities developed in young athletes who specialize early are rather specific to the sport practised (Balyi et al., 2005; Bompa, 1999; Cloes, Delfosse, Ledent and Piéron, 1994; Mattson and Richards, 2010), whereas those developed through sport diversification are more varied (Coakley, 2010; Gould and Carson, 2004; White and Oatman, 2009). These results can be explained by (a) the specificity or diversity of the tasks proposed during training sessions, (b) the time devoted to each of these tasks and (c) the demands of soccer compared with the demands of practising several sport disciplines. However, the results remain difficult to interpret because of several sources of bias: (a) physical maturation, (b) the number of training hours completed during the previous school year, (c) the number of training hours offered by the two sports programs under study and (d) the physical and sporting activities practised outside school. Moreover, this study does not assess the quality of the interventions and exercises proposed during training sessions, nor the motivation of the student-athletes to take part in the training sessions or in the physical and motor tests. Finally, it would be interesting to replicate the present study with different sport disciplines and to highlight the particular contribution of each sport discipline to the development of the physical and motor abilities of young athletes.
Abstract:
Recent advances in the massively parallel computational abilities of graphical processing units (GPUs) have increased their use for general purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse-engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications on reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
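As a rough illustration of the kind of black-box inspection discussed above, the sketch below uses NVIDIA's cuobjdump utility (shipped with the CUDA toolkit) to dump the PTX and SASS sections embedded in a compiled binary. The target file name is a placeholder and the two flags chosen are only one possible starting point, not the workflow described in the report.

```python
# Minimal sketch: inspect a compiled CUDA binary with cuobjdump.
# Assumes the CUDA toolkit is on PATH; "./app" is a hypothetical fat binary
# built with default nvcc settings.
import subprocess

def dump_cuda_sections(binary_path: str) -> dict:
    """Return the PTX and SASS text that cuobjdump extracts from a binary."""
    sections = {}
    for label, flag in (("ptx", "--dump-ptx"), ("sass", "--dump-sass")):
        result = subprocess.run(
            ["cuobjdump", flag, binary_path],
            capture_output=True, text=True, check=False,
        )
        sections[label] = result.stdout
    return sections

if __name__ == "__main__":
    out = dump_cuda_sections("./app")  # hypothetical target binary
    # Embedded PTX sits close to the source code, which is one way a
    # default compilation can leak useful information.
    print(out["ptx"][:2000])
```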
Abstract:
The purpose of this Master's thesis was to study how well the transportation of liquid wastes fits the portfolio of the case company. After a preliminary study, the waste types were narrowed down to waste oil and oily waste from ports. The thesis was carried out by generating a business plan. The qualitative research was conducted as a case study, collecting information from multiple sources. The business plan was prepared by first reviewing the literature on business planning, which then served as the basis for an interview with the customer and interviews with the personnel of the case company. In addition, internet sources and informal conversational interviews with the personnel of the case company, conducted during both the preliminary study and the thesis itself, were used. The results describe the requirements that the case company must meet in order to start operations. The import of waste oil fits the company's portfolio well and does not require any major investments; its success is affected by, among other factors, the price of crude oil, the rouble exchange rate and legislation. Transportation of oily waste from ports, in turn, is not a core competence of the case company, so further actions, such as subcontracting with a waste management company, are required before operations can start.
Abstract:
Dissertation (Master's)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2015.
Abstract:
For a long time, electronic data analysis has been associated with quantitative methods. However, Computer Assisted Qualitative Data Analysis Software (CAQDAS) is increasingly being developed. Although CAQDAS has existed for decades, very few qualitative health researchers report using it. This may be due to the effort required to master the software and to the misconceptions associated with using CAQDAS. While the issue of mastering CAQDAS has received ample attention, little has been done to address the misconceptions associated with it. In this paper, the author reflects on his experience of interacting with one of the popular CAQDAS packages (NVivo) in order to provide evidence-based implications of using the software. The key message is that, unlike statistical software, the main function of CAQDAS is not to analyse data but rather to aid the analysis process, which the researcher must always remain in control of. In other words, researchers must understand that no software can analyse qualitative data. CAQDAS packages are essentially data management tools that support the researcher during analysis.
Abstract:
The only method used to date to measure dissolved nitrate concentration (NITRATE) with sensors mounted on profiling floats is based on the absorption of light at ultraviolet wavelengths by the nitrate ion (Johnson and Coletti, 2002; Johnson et al., 2010; 2013; D'Ortenzio et al., 2012). Nitrate has a modest UV absorption band with a peak near 210 nm, which overlaps with the stronger absorption band of bromide, which peaks near 200 nm. In addition, there is a much weaker absorption due to dissolved organic matter and light scattering by particles (Ogura and Hanya, 1966). The UV spectrum thus consists of three components: bromide, nitrate and a background due to organics and particles. The background also includes thermal effects on the instrument and slow drift. All of these latter effects (organics, particles, thermal effects and drift) tend to be smooth spectra that combine to form an absorption spectrum that is approximately linear in wavelength over relatively short wavelength spans. If the light absorption spectrum is measured in the wavelength range of about 217 to 240 nm (the exact range is to some extent an operator decision), the nitrate concentration can be determined. Two different instruments based on the same optical principles are in use for this purpose. The In Situ Ultraviolet Spectrophotometer (ISUS), built at MBARI or by Satlantic, has been mounted inside the pressure hull of Teledyne/Webb Research APEX and NKE Provor profiling floats, with the optics penetrating through the upper end cap into the water. The Satlantic Submersible Ultraviolet Nitrate Analyzer (SUNA) is placed on the outside of APEX, Provor, and Navis profiling floats in its own pressure housing and is connected to the float through an underwater cable that provides power and communications. Power, communications between the float controller and the sensor, and data processing requirements are essentially the same for both ISUS and SUNA. Several algorithms can be used to deconvolve the nitrate concentration from the observed UV absorption spectrum (Johnson and Coletti, 2002; Arai et al., 2008; Sakamoto et al., 2009; Zielinski et al., 2011). In addition, the default algorithm available in Satlantic sensors is a proprietary approach, but it is not generally used on profiling floats. There are trade-offs in every approach. To date, almost all nitrate sensors on profiling floats have used the Temperature Compensated Salinity Subtracted (TCSS) algorithm developed by Sakamoto et al. (2009), and this document focuses on that method. Further algorithm development is likely, so the data system must clearly identify the algorithm that is used. It is also desirable that the data system allow recalculation of prior data sets with new algorithms. To accomplish this, the float must report not just the computed nitrate but also the observed light intensities. The rule to obtain a single NITRATE parameter is therefore: if the spectrum is present, NITRATE should be recalculated from the spectrum; the computation of nitrate concentration can also generate useful diagnostics of data quality.
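For readers unfamiliar with this class of algorithm, the following is a minimal sketch of a TCSS-style retrieval: the temperature-corrected seawater (bromide) absorption predicted from salinity is subtracted from the measured absorbance, and the residual over roughly 217-240 nm is fit with a nitrate extinction spectrum plus a linear baseline. The coefficient arrays, the form of the temperature correction and the function names are placeholders, not the calibrated values or code of Sakamoto et al. (2009).

```python
# Sketch of a temperature-compensated, salinity-subtracted (TCSS-style)
# nitrate fit. All coefficients are illustrative placeholders; a real
# retrieval uses the sensor calibration file and the published correction.
import numpy as np

def fit_nitrate(wavelength_nm, absorbance, e_nitrate, e_swa_ref,
                temperature_c, salinity_psu, t_ref_c=20.0, t_coeff=0.026):
    """Least-squares nitrate estimate over a 217-240 nm fit window."""
    # 1. Predict seawater (mostly bromide) absorption at in situ temperature
    #    from a reference-temperature extinction spectrum (placeholder form).
    e_swa = e_swa_ref * np.exp(t_coeff * (temperature_c - t_ref_c))
    residual = absorbance - salinity_psu * e_swa

    # 2. Restrict to the fit window chosen by the operator.
    window = (wavelength_nm >= 217.0) & (wavelength_nm <= 240.0)
    wl, res = wavelength_nm[window], residual[window]

    # 3. Fit nitrate extinction plus a linear baseline (intercept + slope),
    #    which absorbs the smooth background from organics, particles, drift.
    design = np.column_stack([e_nitrate[window], np.ones_like(wl), wl])
    coeffs, *_ = np.linalg.lstsq(design, res, rcond=None)
    nitrate_estimate, baseline_intercept, baseline_slope = coeffs
    return nitrate_estimate, (baseline_intercept, baseline_slope)
```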
Abstract:
Part 13: Virtual Reality and Simulation
Abstract:
The CATARINA Leg 1 cruise was carried out from June 22 to July 24, 2012 on board the B/O Sarmiento de Gamboa, under the scientific supervision of Aida Rios (CSIC-IIM). It included the reoccupation of the OVIDE hydrological section, previously performed in June 2002, 2004, 2006, 2008 and 2010 as part of the CLIVAR program (line A25), under the supervision of Herlé Mercier (CNRS-LPO). This section begins near Lisbon (Portugal), runs through the West European Basin and the Iceland Basin, crosses the Reykjanes Ridge (300 miles north of the Charlie-Gibbs Fracture Zone) and ends at Cape Hoppe (southeast tip of Greenland). The objective of this repeated hydrological section is to monitor the variability of water mass properties and main current transports in the basin, complementing the international observation array relevant for climate studies. In addition, the Labrador Sea was partly sampled (stations 101-108) between Greenland and Newfoundland, but heavy weather prevented completion of the section south of 53°40'N. The quality of the CTD data is essential to the first objective of the CATARINA project, i.e. to quantify the Meridional Overturning Circulation and water mass ventilation changes and their effect on the changes in the anthropogenic carbon ocean uptake and storage capacity. The CATARINA project was mainly funded by the Spanish Ministry of Science and Innovation and co-funded by the Fondo Europeo de Desarrollo Regional. The hydrological OVIDE section includes 95 surface-to-bottom stations from coast to coast, collecting profiles of temperature, salinity, oxygen and currents, spaced 2 to 25 nautical miles apart depending on the steepness of the topography. The positions of the stations closely follow those of OVIDE 2002. In addition, 8 stations were carried out in the Labrador Sea. From the 24 bottles closed at various depths at each station, seawater samples were used for salinity and oxygen calibration and for measurements of biogeochemical components that are not reported here. The data were acquired with a Seabird CTD (SBE911+) and an SBE43 dissolved-oxygen sensor belonging to the Spanish UTM group. The SBE Data Processing software was used after decoding and cleaning the raw data. The LPO Matlab toolbox was then used to calibrate and bin the data, as was done for the previous OVIDE cruises, using on the one hand the pre- and post-cruise calibration results for the pressure and temperature sensors (done at Ifremer) and on the other hand the water samples from the 24 rosette bottles at each station for the salinity and dissolved oxygen data. A final accuracy of 0.002°C, 0.002 psu and 0.04 ml/l (2.3 µmol/kg) was obtained on the final profiles of temperature, salinity and dissolved oxygen, compatible with the international requirements issued from the WOCE program.
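The bottle-based calibration step described above can be illustrated with a simple generic sketch: CTD salinity at the bottle-closure depths is regressed against the bottle salinities and the resulting linear correction is applied to the full profiles. This is an illustration of the principle only, not the LPO Matlab toolbox procedure, and the numbers below are made up.

```python
# Generic sketch of bottle-based salinity calibration of CTD profiles.
# A least-squares linear correction is derived from (CTD, bottle) pairs
# at the bottle-closure depths and then applied to full profiles.
import numpy as np

def fit_salinity_correction(ctd_sal_at_bottles, bottle_sal):
    """Return (slope, offset) such that corrected = slope * ctd + offset."""
    design = np.column_stack(
        [ctd_sal_at_bottles, np.ones_like(ctd_sal_at_bottles)]
    )
    (slope, offset), *_ = np.linalg.lstsq(design, bottle_sal, rcond=None)
    return slope, offset

def apply_correction(ctd_sal_profile, slope, offset):
    return slope * ctd_sal_profile + offset

# Made-up example: a small constant offset between sensor and bottles.
ctd = np.array([35.012, 35.105, 34.958, 35.230])
bot = np.array([35.010, 35.103, 34.956, 35.228])
slope, offset = fit_salinity_correction(ctd, bot)
print(round(slope, 5), round(offset, 3))
```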
Abstract:
Most organizations store their historical business information in data warehouses, which are queried with online analytical processing (OLAP) tools to support strategic decisions. This information has to be properly protected against unauthorized access; nevertheless, a great number of legacy OLAP applications have been developed without considering security aspects, or security has been incorporated only after the system was implemented. This work defines a reverse engineering process that allows us to obtain the conceptual model corresponding to a legacy OLAP application, and also analyses and represents the security aspects that may have been established. This process has been aligned with a model-driven architecture for developing secure OLAP applications by defining the transformations needed to apply it automatically. Once the conceptual model has been extracted, it can easily be modified, improved with security, and automatically transformed to generate the new implementation.
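A toy sketch of the first step of such a reverse-engineering process is given below: star-schema metadata is classified into facts and dimensions, and a placeholder security annotation is attached to each recovered conceptual element. The data structures, the metadata format and the naive classification rule (a table is treated as a fact if it references other tables) are simplifications for illustration, not the transformations defined in the work itself.

```python
# Toy reverse-engineering step: recover a conceptual multidimensional model
# (facts, dimensions, security labels) from star-schema metadata.
# The heuristic and the metadata format are illustrative simplifications.
from dataclasses import dataclass, field

@dataclass
class ConceptualElement:
    name: str
    kind: str                       # "fact" or "dimension"
    security_level: str = "public"  # placeholder annotation, refined later
    references: list = field(default_factory=list)

def extract_conceptual_model(tables: dict) -> list:
    """tables maps table name -> list of referenced table names (FKs)."""
    model = []
    for name, fks in tables.items():
        kind = "fact" if fks else "dimension"  # naive classification rule
        model.append(ConceptualElement(name, kind, references=list(fks)))
    return model

schema = {
    "sales": ["product", "customer", "time"],  # fact table with FKs
    "product": [], "customer": [], "time": [],
}
for element in extract_conceptual_model(schema):
    print(element.kind, element.name, element.references)
```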
Abstract:
A purpose of this research study was to demonstrate the practical linguistic study and evaluation of dissertations by using two examples of the latest technology, the microcomputer and the optical scanner. That involved developing efficient methods for data entry and creating computer algorithms appropriate for personal linguistic studies. The goal was to develop a prototype investigation which demonstrated practical solutions for maximizing the linguistic potential of the dissertation data base. Text was entered with a Dest PC Scan 1000 Optical Scanner, whose function was to copy the complete stack of educational dissertations from the Florida Atlantic University Library into an IBM XT microcomputer. The optical scanner demonstrated its practical value by copying 15,900 pages of dissertation text directly into the microcomputer. A total of 199 dissertations, or 72% of the entire stack of education dissertations (277), were successfully copied into the microcomputer's word processor, where each dissertation was analyzed for a variety of syntax frequencies. The results of the study demonstrated the practical use of the optical scanner for data entry, of the microcomputer for data and statistical analysis, and of the college library as a natural setting for text studies. A supplemental benefit was the establishment of a computerized dissertation corpus which could be used for future research and study. The final step was to build a linguistic model of the differences in dissertation writing styles by creating 7 factors from 55 dependent variables through principal components factor analysis. The 7 factors (textual components) were then named and described on a hypothetical construct defined as a continuum from a conversational, interactional style to a formal, academic writing style. The 7 factors were then grouped through discriminant analysis to create discriminant functions for each of the 7 independent variables. The results indicated that a conversational, interactional writing style was associated with more recent dissertations (1972-1987), an increase in author's age, females, and the department of Curriculum and Instruction. A formal, academic writing style was associated with older dissertations (1972-1987), younger authors, males, and the department of Administration and Supervision. It was concluded that there were no significant differences in writing style due to subject matter (community college studies compared to other subject matter). It was also concluded that there were no significant differences in writing style due to the location of dissertation origin (Florida Atlantic University, University of Central Florida, Florida International University).
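The two-stage statistical pipeline described above (principal components factor analysis of 55 syntax-frequency variables followed by discriminant analysis) can be sketched generically with scikit-learn. The data below are random placeholders and the grouping variable is invented for illustration, since the dissertation corpus itself is not available here.

```python
# Generic sketch of the two-stage analysis: principal components on
# syntax-frequency variables, then linear discriminant analysis on the
# retained factors. Data here are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.random((199, 55))              # 199 dissertations x 55 frequency variables
groups = rng.integers(0, 2, size=199)  # e.g. department membership (placeholder)

factors = PCA(n_components=7).fit_transform(X)  # 7 "textual components"
lda = LinearDiscriminantAnalysis().fit(factors, groups)
print(lda.score(factors, groups))      # in-sample separation of the groups
```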
Abstract:
Due to their intriguing dielectric, pyroelectric, elasto-electric, or opto-electric properties, oxide ferroelectrics are vital candidates for the fabrication of most electronics. However, these extraordinary properties exist mainly in the temperature regime around the ferroelectric phase transition, which is usually several hundred K away from room temperature. Therefore, the manipulation of oxide ferroelectrics, especially moving the ferroelectric transition towards room temperature, is of great interest for applications and for basic research. In this thesis, we demonstrate this using the example of NaNbO3 films. We show that the transition temperature of these films can be modified via the strain caused by epitaxial film growth on a structurally mismatched substrate, and that this strain can be fixed by controlling the stoichiometry. The structural and electronic properties of Na1+xNbO3+δ thin films are carefully examined by, among other methods, XRD (e.g. RSM), TEM and cryoelectronic measurements. The electronic features in particular are analyzed via specially developed interdigitated electrodes in combination with an integrated temperature sensor and heater. The electronic data are interpreted using existing as well as novel theories and models and are shown to be closely correlated with the structural characteristics. The major results are:
- Na1+xNbO3+δ thin films can be grown epitaxially on (110)NdGaO3 with a thickness up to 140 nm (thicker films have not been studied). Plastic relaxation of the compressive strain sets in when the thickness of the film exceeds approximately 10 – 15 nm. Films with excess Na are mainly composed of NaNbO3 with a minor contribution of Na3NbO4. The latter phase seems to form nanoprecipitates that are homogeneously distributed in the NaNbO3 film, which helps to stabilize the film and reduce the relaxation of the strain.
- For the nominally stoichiometric films, the compressive strain leads to a broad and frequency-dispersive phase transition at lower temperature (125 – 147 K). This could be either a new transition or a shift in temperature of a known transition. Considering the broadness and frequency dispersion of the transition, it is in fact a transition from the dielectric state at high temperature to a relaxor-type ferroelectric state at low temperature, the latter being based on the formation of polar nano-regions (PNRs). The electric-field dependence of the freezing temperature allows a direct estimation of the volume (70 to 270 nm3) and diameter (5.2 to 8 nm, spherical approximation) of the PNRs. These values agree with literature values obtained by other techniques.
- In the case of the off-stoichiometric samples, we again observe classical ferroelectric behavior. However, the thermally hysteretic phase transition, which is observed around 620 – 660 K for unstrained material, is shifted to room temperature by the compressive strain. Apart from this temperature shift, the temperature dependence of the permittivity is nearly identical for strained and unstrained materials.
- Last but not least, in all cases a significant anisotropy in the electronic and structural properties is observed, which arises naturally from the anisotropic strain caused by the orthorhombic structure of the substrate. However, this anisotropy cannot be explained by the classical model, which tries to fit an orthorhombic film onto an orthorhombic substrate. A novel "square lattice" model, in which the films adopt a "square" shaped in-plane lattice during epitaxial growth at elevated temperature (~1000 K), nicely explains the experimental results.
In this thesis we sketch a way to manipulate the ferroelectricity of NaNbO3 films via strain and stoichiometry. The results indicate that compressive strain generated by the epitaxial growth of the film on a mismatched substrate is able to reduce the ferroelectric transition temperature or induce a phase transition at low temperature. Moreover, adding Na to the NaNbO3 film forms a secondary phase, Na3NbO4, which seems to stabilize the main NaNbO3 phase and the strain, and is thus able to engineer the ferroelectric behavior from the expected classical ferroelectric for perfect stoichiometry, to relaxor-type ferroelectric for slight off-stoichiometry, and back to classical ferroelectric for larger off-stoichiometry. Both strain and stoichiometry are shown to be effective ways to optimize the ferroelectric properties of oxide films.
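For reference, the spherical approximation mentioned above is simply the volume of a sphere; the quick check below (our arithmetic, not taken from the thesis) reproduces the quoted diameter range from the estimated PNR volumes.

```latex
V = \frac{\pi}{6}\,d^{3}
\quad\Longrightarrow\quad
d = \left(\frac{6V}{\pi}\right)^{1/3},
\qquad
d(70\ \mathrm{nm}^{3}) \approx 5.1\ \mathrm{nm},
\qquad
d(270\ \mathrm{nm}^{3}) \approx 8.0\ \mathrm{nm}.
```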
Abstract:
The aim of this novel experimental study is to investigate the behaviour of a 2 m x 2 m model of a masonry groin vault, built by assembling blocks made of a 3D-printed plastic skin filled with mortar. The groin vault was chosen because this vulnerable roofing system is widespread in the historical heritage. Shaking-table tests are carried out to explore the vault response under two support boundary conditions, involving four lateral confinement modes. Processing of the marker displacement data made it possible to examine the collapse mechanisms of the vault, based on the deformed shapes of the arches. A numerical evaluation then follows, to provide the orders of magnitude of the displacements associated with these mechanisms. Given that these displacements are related to the shortening and elongation of the arches, the final objective is the definition of a critical elongation between two diagonal bricks and, consequently, of a diagonal portion. This study aims to continue the previous work and to take another step forward in the research on ground-motion effects on masonry structures.
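As a minimal illustration of the marker-based processing described above, the sketch below computes the distance between two markers on diagonally opposite bricks over time from their 3D coordinates and flags the frames in which the relative elongation exceeds a critical threshold. The threshold value, array names and synthetic trajectories are placeholders, not the criterion or data of the study.

```python
# Sketch: elongation between two markers on diagonally opposite bricks.
# Marker trajectories are (n_frames, 3) arrays of x, y, z coordinates;
# the 2% critical threshold is a placeholder, not the study's criterion.
import numpy as np

def relative_elongation(marker_a: np.ndarray, marker_b: np.ndarray) -> np.ndarray:
    """Relative change of the a-b distance with respect to the first frame."""
    distance = np.linalg.norm(marker_a - marker_b, axis=1)
    return (distance - distance[0]) / distance[0]

def frames_exceeding(marker_a, marker_b, critical=0.02):
    """Indices of frames where the diagonal elongation exceeds the threshold."""
    return np.flatnonzero(relative_elongation(marker_a, marker_b) > critical)

# Tiny synthetic example: marker_b drifts away from marker_a along x.
t = np.linspace(0.0, 1.0, 200)
marker_a = np.zeros((200, 3))
marker_b = np.column_stack([1.0 + 0.05 * t, np.zeros(200), np.zeros(200)])
print(frames_exceeding(marker_a, marker_b)[:5])
```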