904 results for encoding of measurement streams
Abstract:
Recent years have seen an astronomical rise in SQL Injection Attacks (SQLIAs) used to compromise the confidentiality, authentication and integrity of organisations' databases. Intruders are becoming smarter at obfuscating web requests to evade detection, and, combined with increasing volumes of web traffic from the Internet of Things (IoT), cloud-hosted and on-premise business applications, it has become evident that existing, mostly static signature-based approaches cannot cope with novel attack signatures. A SQLIA detection and prevention solution can instead be achieved by exploring a bio-inspired supervised learning approach that takes a labelled dataset of numerical attributes as input for classifying true positives and negatives. We present in this paper Numerical Encoding to Tame SQLIA (NETSQLIA), a proof of concept for scalable numerical encoding of features into dataset attributes with a labelled class, obtained from deep web traffic analysis. For the numerical attribute encoding, the model leverages a proxy to intercept and decrypt web traffic. The intercepted web requests are then assembled for front-end SQL parsing and pattern matching using a traditional Non-Deterministic Finite Automaton (NFA). The paper presents a technique for extracting numerical attributes of any size, primed as an input dataset to an Artificial Neural Network (ANN) and statistical Machine Learning (ML) algorithms, implemented using a Two-Class Averaged Perceptron (TCAP) and Two-Class Logistic Regression (TCLR) respectively. This methodology then forms the subject of an empirical evaluation of the model's suitability for accurate classification of both legitimate web requests and SQLIA payloads.
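As a rough illustration of the kind of numerical attribute encoding described above (this is not the NETSQLIA implementation; the feature choices below are illustrative assumptions), a raw web request can be mapped to a fixed-length numeric vector that a perceptron or logistic-regression classifier could consume:

```python
import re

# Illustrative keyword list; a real system would use a far larger lexicon.
SQL_KEYWORDS = ("select", "union", "insert", "drop", "or", "and")

def encode_request(request: str) -> list:
    """Map one web request string to a vector of numerical attributes."""
    lowered = request.lower()
    return [
        len(request),                              # raw request length
        lowered.count("'"),                        # quote characters
        lowered.count("--") + lowered.count("#"),  # SQL comment tokens
        sum(len(re.findall(rf"\b{k}\b", lowered))  # SQL keyword hits
            for k in SQL_KEYWORDS),
    ]

benign = encode_request("id=42&page=home")
attack = encode_request("id=1' OR '1'='1' --")
```

Vectors like `attack` (many quotes, a comment token, keyword hits) versus `benign` (all zeros past the length) are the sort of separable numerical input the two-class learners take.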
Abstract:
The speed with which data has moved from being scarce, expensive and valuable, justifying detailed and careful verification and analysis, to a situation where streams of detailed data are almost too large to handle has caused a series of shifts. Legal systems already have severe problems keeping up with, or even in touch with, the rate at which unexpected outcomes flow from information technology. Until recently, Big Data applications were driven by the capacity to harness massive quantities of existing data. Now real-time data flows are rising swiftly, becoming more invasive, and offering monitoring potential that is eagerly sought by commerce and government alike. The ambiguities as to who owns this often remarkably intrusive personal data need to be resolved, and rapidly, but resolution is likely to encounter rising resistance from industrial and commercial bodies who see this data flow as 'theirs'. There have been many changes in ICT that have led to stresses in resolving the conflicts between IP exploiters and their customers, but this one is of a different scale, owing to the wide potential for individual customisation of pricing, identification, and the rising commercial value of integrated streams of diverse personal data. A new reconciliation between the parties involved is needed: new business models, and a shift from the current confusion over who owns what data towards alignments that better accord with community expectations. After all, they are the customers, and the emergence of information monopolies needs to be balanced by appropriate consumer/subject rights. This will be a difficult discussion, but one that is needed to realise the great benefits clearly available to all if these issues can be positively resolved. Customers need to make these data flows contestable in some form. These big data flows are only going to grow and become ever more instructive.
A better balance is necessary. For the first time these changes are directly affecting the governance of democracies, as the very effective micro-targeting tools deployed in recent elections have shown. Yet the data gathered is not available to the subjects. This is not a survivable social model. The Private Data Commons needs our help. Businesses and governments exploit big data without regard for issues of legality, data quality, disparate data meanings, and process quality. This often results in poor decisions, with individuals bearing the greatest risk. The threats harbored by big data extend far beyond the individual, however, and call for new legal structures, business processes, and concepts such as a Private Data Commons. This Web extra is the audio part of a video in which author Marcus Wigan expands on his article "Big Data's Big Unintended Consequences" and discusses these issues.
Abstract:
Passive sampling devices (PS) are widely used for pollutant monitoring in water, but estimation of the measurement uncertainties of PS has seldom been undertaken. The aim of this work was to identify the key parameters governing PS measurements of metals and their dispersion. We report the results of an in situ intercomparison exercise on diffusive gradients in thin films (DGT) in surface waters. Interlaboratory uncertainties of time-weighted average (TWA) concentrations were satisfactory (from 28% to 112%) given the number of participating laboratories (10) and the ultra-trace metal concentrations involved. Data dispersion of TWA concentrations was mainly explained by uncertainties generated during the DGT handling and analytical procedure steps. We highlight that DGT handling is critical for metals such as Cd, Cr and Zn, implying that DGT assembly/dismantling should be performed in very clean conditions. Using a unique dataset, we demonstrated that DGT markedly lowered the limit of quantification (LOQ) in comparison to spot sampling, and stressed the need for accurate data calculation.
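For context, the TWA concentration in the DGT technique follows the standard relation C = M·Δg / (D·t·A), where M is the mass accumulated on the binding gel, Δg the diffusive layer thickness, D the metal's diffusion coefficient, t the deployment time and A the exposure window area. A minimal sketch; the numerical values below are typical textbook figures, not data from this study:

```python
def dgt_twa_concentration(mass_ng, delta_g_cm, diff_coeff_cm2_s,
                          area_cm2, time_s):
    """Standard DGT equation: C = M * delta_g / (D * t * A).

    Returns a time-weighted average concentration in ng/cm^3
    (numerically equal to ug/L).
    """
    return mass_ng * delta_g_cm / (diff_coeff_cm2_s * time_s * area_cm2)

# Assumed example: 50 ng accumulated over a 14-day deployment,
# 0.094 cm diffusive layer, D ~ 6e-6 cm^2/s, 3.14 cm^2 window.
c_twa = dgt_twa_concentration(50.0, 0.094, 6e-6, 3.14, 14 * 24 * 3600)
```

The propagated uncertainty of such a calculation depends on every term, which is why handling (affecting M) and the analytical step dominate the dispersion reported above.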
Abstract:
Electoral researchers are so accustomed to analyzing the choice of the single most preferred party as the left-hand side variable of their models of electoral behavior that they often ignore revealed preference data. Drawing on random utility theory, their models predict electoral behavior at the extensive margin of choice. Since the seminal work of Luce and others on individual choice behavior, however, many social science disciplines (consumer research, labor market research, travel demand, etc.) have extended their inventory of observed preference data with, for instance, multiple paired comparisons, complete or incomplete rankings, and multiple ratings. Eliciting (voter) preferences using these procedures and applying appropriate choice models is known to considerably increase the efficiency of estimates of causal factors in models of (electoral) behavior. In this paper, we demonstrate the efficiency gain from adding additional preference information to first preferences, up to full ranking data. We do so for multi-party systems of different sizes, using simulation studies as well as empirical data from the 1972 German election study. A comparison of the practical considerations for using ranking versus single-preference data yields suggestions for the choice of measurement instruments in different multi-candidate and multi-party settings.
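The efficiency gain from full rankings is commonly modeled with the rank-ordered ("exploded") logit, which decomposes a ranking's probability into a sequence of first choices from shrinking choice sets. A minimal sketch under that assumption (the paper's own estimator may differ):

```python
from math import exp

def ranking_likelihood(utilities, ranking):
    """Rank-ordered logit probability of a full ranking.

    P(ranking) = product over ranks of softmax(u_choice)
    taken over the alternatives still unranked at that step.
    """
    remaining = list(range(len(utilities)))
    prob = 1.0
    for choice in ranking:
        denom = sum(exp(utilities[j]) for j in remaining)
        prob *= exp(utilities[choice]) / denom
        remaining.remove(choice)
    return prob

# Ranking consistent with the utilities is more likely than its reverse.
p_consistent = ranking_likelihood([1.0, 0.0, -1.0], [0, 1, 2])
p_reversed = ranking_likelihood([1.0, 0.0, -1.0], [2, 1, 0])
```

Because each ranking contributes several choice observations instead of one, the likelihood carries more information per respondent, which is the source of the efficiency gain discussed above.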
Abstract:
Doctoral thesis in Forestry and Natural Resources Engineering - Instituto Superior de Agronomia - UL
Abstract:
The Homogeneous Charge Compression Ignition (HCCI) engine is a promising combustion concept for reducing NOx and particulate matter (PM) emissions while providing high thermal efficiency in internal combustion engines. The concept, however, has limitations in combustion control and in achieving stable combustion at high loads. For HCCI to be a viable option for on-road vehicles, further understanding of its combustion phenomena and their control is essential. This thesis therefore focuses both on the experimental setup of an HCCI engine at Michigan Technological University (MTU) and on the development of a physical numerical simulation model, the Sequential Model for Residual Affected HCCI (SMRH), to investigate the performance of HCCI engines. The primary focus is on understanding the effects of intake and exhaust valve timings on HCCI combustion. On the experimental side, this thesis contributed to the development of the HCCI setup at MTU, in particular the measurement of valve profiles and of piston-to-valve contact clearance for procuring new pistons for further studies of high geometric compression ratio HCCI engines. It also covers the development and testing of a supercharging station and the setup of an electrical air heater to extend the HCCI operating region. The HCCI engine setup is based on a GM 2.0 L LHU Gen 1 engine, a direct-injected engine with variable valve timing (VVT) capabilities. For the simulation studies, a computationally efficient modeling platform was developed and validated against experimental data from a single-cylinder HCCI engine. The in-cylinder pressure trace, combustion phasing (CA10, CA50, BD) and the performance metrics IMEP, thermal efficiency, and CO emissions are found to be in good agreement with experimental data for different operating conditions. The effects of phasing the intake and exhaust valves are analyzed using SMRH.
In addition, a novel index, the Fuel Efficiency and Emissions (FEE) index, is defined and used to determine the optimal valve timings for engine operation through FEE contour maps.
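The combustion-phasing metrics mentioned (CA10, CA50) are the crank angles at which a given fraction of the cumulative heat release is reached. A simplified sketch of how such metrics can be read off a cumulative heat-release curve (the threshold search without interpolation is a deliberate simplification of what engine-analysis code actually does):

```python
def burn_angle(crank_angles, cum_heat_release, fraction):
    """Crank angle at which `fraction` of total heat release is reached.

    E.g. fraction=0.10 gives CA10, fraction=0.50 gives CA50.
    Assumes cum_heat_release is monotonically non-decreasing.
    """
    target = fraction * cum_heat_release[-1]
    for theta, q in zip(crank_angles, cum_heat_release):
        if q >= target:
            return theta
    return crank_angles[-1]

# Assumed toy curve: crank angle (deg aTDC) vs cumulative heat release (J).
angles = [-10, -5, 0, 5, 10]
q_cum = [0.0, 10.0, 50.0, 90.0, 100.0]
ca50 = burn_angle(angles, q_cum, 0.50)
```

Validating these phasing metrics against measured pressure traces, as the thesis does for SMRH, is the standard check that a simplified combustion model captures the timing of the real burn.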
Abstract:
The examination of workplace aggression as a global construct conceptualization has gained considerable attention over the past few years as organizations work to better understand and address the occurrence and consequences of this challenging construct. The purpose of this dissertation is to build on previous efforts to validate the appropriateness and usefulness of a global conceptualization of the workplace aggression construct. The dissertation is broken into two parts: Part 1 used a confirmatory factor analysis approach to assess the existence of workplace aggression as a global construct; Part 2 used a series of correlational analyses to examine the relationship between a selection of commonly experienced individual strain-based outcomes and the global construct conceptualization assessed in Part 1. Participants were a diverse sample of 219 working individuals from Amazon's Mechanical Turk participant pool. Results of Part 1 did not support a one-factor global construct conceptualization of workplace aggression. However, support was found for a higher-order five-factor model, suggesting that workplace aggression may be conceptualized as an overarching construct made up of separate workplace aggression constructs. Results of Part 2 supported the relationships between an existing global construct workplace aggression conceptualization and a series of strain-based outcomes. Additional post-hoc correlational analyses showed that individual factors such as emotional intelligence and personality are related to the experience of workplace aggression.
Further, moderated regression analysis demonstrated that individuals experiencing high levels of workplace aggression reported higher job satisfaction when they felt strongly that the aggressive act was highly visible and, similarly, when they felt that there was a clear intent to cause harm. Overall, the findings of this dissertation support the need to simplify the current state of workplace aggression measurement. Future research should continue to examine workplace aggression to shed additional light on the structure and usefulness of this complex construct.
Abstract:
A method of precise characterization of surface nanoscale axial photonics (SNAP) structures with a reference fiber is proposed, analyzed, and demonstrated experimentally. The method is based on simultaneous coupling of a microfiber to a SNAP structure under test and to a reference optical fiber. Significant reduction of measurement errors associated with the environmental temperature variations and technical noise of the spectrum analyzer is demonstrated. The achieved measurement precision of the effective radius variation of the SNAP structure is 0.2 Å.
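In SNAP characterization, an effective-radius variation maps to a resonance-wavelength shift via the first-order relation Δr_eff/r₀ = Δλ/λ. A minimal sketch of that conversion; the 62.5 µm fiber radius and 1550 nm wavelength are typical assumed values, not parameters taken from this paper. With these numbers, a 0.5 pm wavelength shift corresponds to roughly the 0.2 Å precision quoted:

```python
def radius_variation_angstrom(dlambda_pm, wavelength_nm=1550.0,
                              fiber_radius_um=62.5):
    """Effective radius variation from a resonance wavelength shift.

    Uses the first-order SNAP relation dr_eff = r0 * dlambda / lambda.
    Returns the variation in Angstrom.
    """
    dlambda_m = dlambda_pm * 1e-12
    wavelength_m = wavelength_nm * 1e-9
    r0_m = fiber_radius_um * 1e-6
    return r0_m * dlambda_m / wavelength_m * 1e10  # metres -> Angstrom

dr = radius_variation_angstrom(0.5)  # ~0.2 A for a 0.5 pm shift
```

The relation makes clear why sub-picometre spectral stability (hence the reference-fiber scheme against temperature drift and analyzer noise) is what sets the attainable radius precision.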
Abstract:
The U.S. National Science Foundation metadata registry under development for the National Science Digital Library (NSDL) is a repository intended to manage both metadata schemes and schemas. The focus of this draft discussion paper is on the scheme side of the development work. In particular, the paper is concerned with issues around the creation of historical snapshots of concept changes and their encoding in SKOS. By framing the problem as we see it, we hope to find an optimal solution to our need for a SKOS encoding of these snapshots. Since what we are seeking to model is concept change, it is necessary at the outset to make clear that we are not talking about changes to a concept of such a nature that they would require the declaration of a new concept with its own URI. In the project, we avoid the terms "version" and "versioning" for changes to concepts, reserving them for significant changes to schemes as a whole. Significant changes triggering a new scheme version might include changes in scheme documentation that express a significant shift in the purpose, use or architecture of the scheme. We use the term "snapshot" to denote the state of a scheme at identifiable points in time. Thus, snapshots are identifiable views of a scheme that record the incremental changes to concepts, to relationships among concepts, and to scheme documentation since the last snapshot. Aspects of concept change occur that we need to capture and make available both through the registry and, potentially, in transmission of a scheme to other registries. We call these captured states "concept instances."
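One possible SKOS/Dublin Core modeling of such a concept instance is sketched below. The namespace, URIs and labels are hypothetical, and dcterms:isVersionOf is only one candidate linking property, not necessarily the encoding the project adopted:

```turtle
@prefix skos:    <http://www.w3.org/2004/02/skos/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:      <http://example.org/registry/> .   # hypothetical namespace

# The live concept keeps its stable URI across incremental changes.
ex:concept42 a skos:Concept ;
    skos:prefLabel "Measurement stream"@en .

# A snapshot ("concept instance") records the state at a point in time.
ex:concept42-2006-01-15 a skos:Concept ;
    dcterms:isVersionOf ex:concept42 ;          # link to the live concept
    dcterms:issued "2006-01-15"^^xsd:date ;     # when the state was captured
    skos:prefLabel "Data stream"@en .           # label as it read back then
```

The key property of this shape is that label or relationship edits accumulate on the live URI while each dated instance remains immutable, which matches the snapshot semantics described above.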
Abstract:
The superior parietal lobule (SPL) of macaques is classically described as an associative cortex implicated in visuospatial perception and in the planning and control of reaching and grasping movements (De Vitis et al., 2019; Galletti et al., 2003, 2018, 2022; Fattori et al., 2017; Hadjidimitrakis et al., 2015). These processes result from the integration of signals from different sensory modalities. During a goal-directed action, eye and limb information are combined to ensure that the hand is transported to the gazed target location and that the arm is held steady in the final position. The SPL areas V6A, PEc and PE contain cells sensitive to the direction of gaze and to limb position, but less is known about the degree to which these signals are encoded independently. In this thesis, we evaluated the influence of eye and arm position information on single-neuron activity in areas V6A, PEc and PE during the holding period after execution of an arm reaching movement, when the gaze and hand are both still at the reach target. Two male macaques (Macaca fascicularis) performed a reaching task while single-unit activity was recorded from areas V6A, PEc and PE. We found that neurons in all these areas were modulated by eye and static arm positions, with a joint encoding of gaze and somatosensory signals in V6A and PEc and a mostly separate processing of the two signals in PE. The elaboration of this information reflects the functional gradient found in the SPL, with the caudal sector characterized by visuo-somatic properties and the rostral sector dominated by somatosensory signals. This evidence also agrees well with the recent reallocation of areas V6A and PEc to Brodmann's area 7, based on their similar structural and functional features, with PE belonging to Brodmann's area 5 (Gamberini et al., 2020).
Abstract:
A method to quantify lycopene and β-carotene in freeze-dried tomato pulp by high performance liquid chromatography (HPLC) was validated according to the criteria of selectivity, sensitivity, precision and accuracy, and the measurement uncertainty was estimated from data obtained in the validation. The validated method is selective for these analytes and showed good precision and accuracy. The detection limits for lycopene and β-carotene were 4.2 and 0.23 mg 100 g-1, respectively. The expanded uncertainty (k = 2) for lycopene was 104 ± 21 mg 100 g-1 and for β-carotene 6.4 ± 1.5 mg 100 g-1.
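The expanded uncertainty reported follows the usual GUM convention U = k·u_c, where the combined standard uncertainty u_c is the quadrature sum of independent components and k = 2 gives approximately 95% coverage. A minimal sketch; the component values below are illustrative, not this paper's uncertainty budget:

```python
from math import sqrt

def expanded_uncertainty(std_uncertainties, k=2.0):
    """GUM-style expanded uncertainty U = k * u_c.

    u_c is the quadrature (root-sum-of-squares) combination of
    independent standard uncertainty components.
    """
    u_c = sqrt(sum(u * u for u in std_uncertainties))
    return k * u_c

# Assumed components (e.g. calibration, repeatability, recovery),
# all already expressed as standard uncertainties in mg 100 g-1:
u_expanded = expanded_uncertainty([8.0, 6.0, 3.0])
```

With k = 2 the reported value is then quoted as result ± U, which is the "104 ± 21" form used for lycopene above.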
Abstract:
Universidade Estadual de Campinas - Faculdade de Educação Física
Abstract:
The landscape of approaches in work and organizational psychology (PTO, in the Portuguese acronym) in Brazil shows great theoretical and methodological diversity, reflecting the presence of distinct scientific paradigms in the delimitation and organization of this field. The aim of this article is to analyze these paradigms and relate them to three thematic axes of PTO in Brazil: behavior, subjectivity, and the clinical axis. The epistemological and methodological foundations of each of these axes, along with selected works and authors, are examined, and their contribution to the field of PTO in Brazil is discussed. The article develops a discussion of the tensions among these axes, which derive from the pressure to meet criteria of academic rigor and organizational relevance at the same time. Finally, the article shows the diversification of the PTO field in Brazil and the challenges arising from it.
Abstract:
Human activities that modify land cover can alter the structure and biogeochemistry of small streams, but these effects are poorly known over large regions of the humid tropics where rates of forest clearing are high. We examined how conversion of Amazon lowland tropical forest to cattle pasture influenced the physical and chemical structure, organic matter stocks and N cycling of small streams. We combined a regional ground survey of small streams with an intensive study of nutrient cycling using ¹⁵N additions in three representative streams: a second-order forest stream, a second-order pasture stream and a third-order pasture stream. These three streams were within several km of each other and on similar soils. Replacement of forest with pasture decreased stream habitat complexity by changing streams from run-and-pool channels with forest leaf detritus (50% cover) to grass-filled (63% cover) channels with runs of slow-moving water. In the survey, pasture streams consistently had lower concentrations of dissolved oxygen and nitrate (NO₃⁻) than similar-sized forest streams. Stable isotope additions revealed that the second-order pasture stream had a shorter NH₄⁺ uptake length, higher uptake rates into organic matter components and a shorter ¹⁵NH₄⁺ residence time than the second-order forest stream or the third-order pasture stream. Nitrification was significant in the forest stream (19% of the added ¹⁵NH₄⁺) but not in the second-order (0%) or third-order (6%) pasture stream. The forest stream retained 7% of added ¹⁵N in organic matter compartments and exported 53% (¹⁵NH₄⁺ = 34%; ¹⁵NO₃⁻ = 19%). In contrast, the second-order pasture stream retained 75% of added ¹⁵N, predominantly in grasses (69%), and exported only 4% as ¹⁵NH₄⁺.
The fate of tracer ¹⁵N in the third-order pasture stream more closely resembled that in the forest stream, with 5% of added N retained and 26% exported (¹⁵NH₄⁺ = 9%; ¹⁵NO₃⁻ = 6%). These findings indicate that the widespread infilling by grass of small streams in areas deforested for pasture greatly increases the retention of inorganic N in first- and second-order streams, which make up roughly three-fourths of total stream channel length in Amazon basin watersheds. The importance of this phenomenon and its effect on N transport to larger rivers across the Amazon Basin will depend on better evaluation of both the extent and the scale at which stream infilling by grass occurs, but our analysis suggests the phenomenon is widespread.
Abstract:
The behavior of the Steinmetz coefficient has been described for several different materials: steels with 3.2% Si and 6.5% Si, MnZn ferrite and Ni-Fe alloys. It is shown that, for steels, the Steinmetz law achieves R² > 0.999 only between 0.3 and 1.2 T, the interval where domain wall movement dominates. The anisotropy of the Steinmetz coefficient for non-oriented (NO) steel is also discussed. For a NO 3.2% Si steel with a strong Goss component in its texture, the power-law coefficient and remanence decrease monotonically as the direction of measurement rotates from the rolling direction (RD) to the transverse direction (TD), although the coercive field increases. The remanence behavior can be related to the minimization of the demagnetizing field at the surface grains. The data appear to indicate that the Steinmetz coefficient increases as the magnetocrystalline anisotropy constant decreases. (c) 2008 Elsevier B.V. All rights reserved.
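For reference, the Steinmetz power law P = k·Bⁿ is typically fitted by linear regression in log-log space over the interval where it holds (0.3 to 1.2 T here). A minimal sketch with synthetic data; the coefficients below are invented for the demonstration, not values from the paper:

```python
from math import log, exp

def fit_steinmetz(b_vals, p_vals):
    """Fit P = k * B**n by linear least squares in log-log space.

    log(P) = log(k) + n * log(B), so the slope of the regression
    is the Steinmetz exponent n and exp(intercept) is k.
    """
    xs = [log(b) for b in b_vals]
    ys = [log(p) for p in p_vals]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return exp(my - slope * mx), slope  # (k, n)

# Synthetic loss data generated from k=1.5, n=1.8 on the 0.3-1.2 T range.
b_data = [0.3, 0.6, 0.9, 1.2]
p_data = [1.5 * b ** 1.8 for b in b_data]
k_fit, n_fit = fit_steinmetz(b_data, p_data)
```

Restricting the fit to the domain-wall-dominated induction range, as the paper does, is what keeps R² near unity; outside it the log-log data bend away from a straight line and a single exponent no longer applies.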