951 results for measurement data


Relevance:

30.00%

Publisher:

Abstract:

Satellite remote sensing is being effectively used in monitoring the ocean surface and its overlying atmosphere. Technical growth in the field of satellite sensors has made satellite measurement an inevitable part of oceanographic and atmospheric research. Among the ocean observing sensors, ocean colour sensors make use of the visible band of the electromagnetic spectrum (shorter wavelengths). The use of shorter wavelengths ensures the fine spatial resolution needed to depict the oceanographic and atmospheric characteristics of any region with significant spatio-temporal variability. The region off the southwest coast of India is such an area, showing very significant spatio-temporal oceanographic and atmospheric variability due to the seasonally reversing surface winds and currents. Consequently, the region is enriched with features such as upwelling, sinking, eddies and fronts. Among them, upwelling brings nutrient-rich waters from subsurface layers to the surface layers. During this process primary production is enhanced, which is measured by ocean colour sensors as high values of Chl a. The vertical attenuation depth of incident solar radiation (Kd) and the Aerosol Optical Depth (AOD) are two further parameters provided by ocean colour sensors. Kd is also subject to significant seasonal variability due to changes in the Chl a content of the water column. Moreover, Kd is affected by sediment transport in the upper layers, as the region experiences land drainage resulting from copious rainfall. The wide range of variability in wind speed and direction may also influence aerosol sources and transport, and consequently the AOD. The present doctoral thesis concentrates on the utility of Chl a, Kd and AOD provided by satellite ocean colour sensors for understanding oceanographic and atmospheric variability off the southwest coast of India. The thesis is divided into six chapters with further subdivisions.
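
The abstract does not spell out the definition of Kd; for orientation, the standard relation between the downwelling irradiance profile and the diffuse attenuation coefficient (a textbook fact about the quantity, not a result of the thesis) is:

```latex
% Downwelling irradiance E_d decays approximately exponentially with depth z;
% K_d is the diffuse attenuation coefficient and 1/K_d the attenuation depth.
E_d(z) = E_d(0^-)\, e^{-K_d z},
\qquad
K_d(z) = -\frac{1}{E_d(z)}\,\frac{\mathrm{d}E_d(z)}{\mathrm{d}z}
```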

Relevance:

30.00%

Publisher:

Abstract:

The problem of using information available from one variable X to make inference about another Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = µ(X) + ε. Here µ(x) is the mean response at the predictor variable value X = x, and ε = Y − µ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function µ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount absorbed by the plant, X, is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease, Y, among the residents of a city in relation to the amount of certain air pollutants. The amount of the air pollutants, Z, can be measured at certain observation stations in the city, but the actual exposure of the residents to the pollutants, X, is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function µ(X) in the Berkson measurement error model: Y = µ(X) + ε, X = Z + η, where η and ε are random errors with E(ε) = 0, X and η are d-dimensional, and Z is the observable d-dimensional r.v.
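
A minimal numerical sketch (my own illustration, not part of the talk) of why the two error structures behave differently: with a linear µ, naive regression of Y on the observed predictor is unbiased under Berkson error but attenuated under classical error. The variable names, the linear µ and the noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta0, beta1 = 1.0, 2.0                      # illustrative true line mu(x) = beta0 + beta1*x

# Berkson model: Z is set (e.g. a nominal dose), the true X scatters around it.
Z = rng.uniform(0, 10, n)
X_berkson = Z + rng.normal(0, 1, n)          # X = Z + eta
Y_berkson = beta0 + beta1 * X_berkson + rng.normal(0, 1, n)

# Classical model: X is the true value, the observed W scatters around it.
X_true = rng.uniform(0, 10, n)
W = X_true + rng.normal(0, 1, n)             # W = X + error
Y_classical = beta0 + beta1 * X_true + rng.normal(0, 1, n)

# Naive least-squares slopes of Y on the observed predictor:
slope_berkson = np.polyfit(Z, Y_berkson, 1)[0]      # ~2.0: unbiased for a linear mu
slope_classical = np.polyfit(W, Y_classical, 1)[0]  # attenuated towards 0 (~1.8 here)
print(f"Berkson slope  : {slope_berkson:.3f}")
print(f"Classical slope: {slope_classical:.3f}")
```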

Relevance:

30.00%

Publisher:

Abstract:

The increasing interconnection of information and communication systems leads to a further increase in complexity and thus also to a further growth in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to provide adequate protection against intrusions into IT infrastructures. Intrusion Detection Systems (IDS) have established themselves as a very effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and hosts in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognise new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that operates in real time. To address these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on the Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of the normal network state (NNB) and an update model. Within OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Different approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased by novel approaches for initialising the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is to further examine the unknown connections detected by the EGHSOM and to verify whether they are normal. However, due to the concept-drift phenomenon the network traffic data change constantly, which leads to non-stationary network data in real time. This phenomenon is handled by the update model. The EGHSOM model can detect new anomalies effectively, and the NNB model adapts optimally to changes in the network data. In the experimental evaluation the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated with 10-fold cross-validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the huge volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all other approaches. This can be attributed to the following key points: processing of the collected network data, achievement of the best performance (e.g. overall accuracy), detection of unknown connections, and development of a real-time intrusion detection model.
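
A minimal sketch (my own illustration, not the dissertation's implementation) of the idea behind a classification-confidence margin threshold over learned prototypes: a connection vector whose best and second-best prototype distances are too close together is flagged as unknown and handed over to the normal-network-state (NNB) check. The prototype values, scoring rule and threshold are assumptions.

```python
import numpy as np

def classify_with_margin(x, prototypes, labels, margin_threshold=0.25):
    """Assign a connection vector x to the label of its nearest prototype,
    unless the confidence margin between the two closest prototypes is too
    small, in which case the connection is flagged as 'unknown'."""
    d = np.linalg.norm(prototypes - x, axis=1)        # distances to all prototypes
    order = np.argsort(d)
    best, second = d[order[0]], d[order[1]]
    margin = (second - best) / (second + 1e-12)       # relative confidence margin
    if margin < margin_threshold:
        return "unknown"                              # hand over to the NNB model
    return labels[order[0]]

# toy usage with two prototypes ("normal" and a known attack pattern)
prototypes = np.array([[0.1, 0.2, 0.0], [0.9, 0.8, 1.0]])
labels = ["normal", "attack"]
print(classify_with_margin(np.array([0.15, 0.25, 0.05]), prototypes, labels))  # normal
print(classify_with_margin(np.array([0.50, 0.50, 0.50]), prototypes, labels))  # unknown
```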

Relevance:

30.00%

Publisher:

Abstract:

In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial Least Squares is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), which examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
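
A minimal sketch (illustrative only, not the article's procedure) of the classical correction for attenuation that disattenuated regression builds on: the observed correlation is divided by the square root of the product of the scale reliabilities. The reliability values below are invented for the example.

```python
import numpy as np

def disattenuate(r_xy, rel_x, rel_y):
    """Correct an observed correlation for measurement error (attenuation),
    given the reliabilities of the two summated rating scales."""
    return r_xy / np.sqrt(rel_x * rel_y)

r_obs = 0.30                 # observed correlation between two scale scores
rel_x, rel_y = 0.70, 0.80    # e.g. Cronbach's alpha of each scale (assumed values)
print(f"disattenuated r = {disattenuate(r_obs, rel_x, rel_y):.3f}")  # ~0.401
```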

Relevance:

30.00%

Publisher:

Abstract:

The main instrument used in psychological measurement is the self-report questionnaire. One of its major drawbacks, however, is its susceptibility to response biases. A known strategy to control these biases has been the use of so-called ipsative items. Ipsative items require the respondent to make between-scale comparisons within each item; the selected option determines to which scale the weight of the answer is attributed. Consequently, in questionnaires consisting only of ipsative items every respondent is allotted the same total score, which each respondent can distribute differently over the scales. This type of response format therefore yields data that can be considered compositional from its inception. Methodologically oriented psychologists have heavily criticized this type of item format, since the resulting data are also marked by the associated unfavourable statistical properties. Nevertheless, clinicians have kept using these questionnaires to their satisfaction. This investigation therefore aims to evaluate both positions and addresses the similarities and differences between the two data collection methods. The ultimate objective is to formulate a guideline on when to use which type of item format. The comparison is based on data obtained with both an ipsative and a normative version of three psychological questionnaires, which were administered to 502 first-year psychology students according to a balanced within-subjects design. Previous research only compared the direct ipsative scale scores with the derived ipsative scale scores. The use of compositional data analysis techniques also enables one to compare derived normative score ratios with direct normative score ratios. The addition of this second comparison not only offers the advantage of a better-balanced research strategy; in principle it also allows for parametric testing in the evaluation.
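
A minimal sketch (my own illustration, not the study's code) of the two derived quantities the comparison rests on: closing normative scale scores to obtain derived ipsative compositions, and taking pairwise score ratios, which are the quantities comparable across the two formats in a compositional framework. The scores are hypothetical.

```python
import numpy as np

def closure(scores):
    """Derive an ipsative-style composition from normative scale scores
    by dividing each scale score by the respondent's total."""
    scores = np.asarray(scores, dtype=float)
    return scores / scores.sum()

def score_ratios(composition):
    """Pairwise score ratios between scales; ratios are unaffected by the
    closure and can be compared across ipsative and normative versions."""
    c = np.asarray(composition, dtype=float)
    return c[:, None] / c[None, :]

normative = [12, 8, 20]                    # hypothetical scores on three scales
derived_ipsative = closure(normative)      # [0.30, 0.20, 0.50]
print(derived_ipsative)
print(score_ratios(derived_ipsative))      # identical to the ratios of the raw scores
```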

Relevance:

30.00%

Publisher:

Abstract:

Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations consist of trajectories of compositional data, that is, of sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories: an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors, and spline smoothing is used to isolate the smooth part of the trajectory, to which the clustering algorithms are then applied. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose one metric that accounts for differences in both shape and level, and another accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
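
A minimal sketch of the pipeline the abstract describes: express each trajectory in a log-ratio space, spline-smooth each coordinate, then cluster the smoothed curves. The additive log-ratio transform, the smoothing parameter and the Euclidean distance between whole curves (a shape-and-level metric) are illustrative choices, not necessarily those proposed in the essay.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.cluster.hierarchy import linkage, fcluster

def alr(comp):
    """Additive log-ratio transform of a composition (last part as reference)."""
    comp = np.asarray(comp, dtype=float)
    return np.log(comp[..., :-1] / comp[..., -1:])

def smooth_trajectory(t, traj, s=0.5):
    """Spline-smooth each log-ratio coordinate of one compositional trajectory."""
    y = alr(traj)                                        # shape (n_times, D-1)
    return np.column_stack([UnivariateSpline(t, y[:, j], s=s)(t)
                            for j in range(y.shape[1])])

# toy data: 6 trajectories of 3-part compositions observed at 20 time points
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 20)
raw = rng.dirichlet(alpha=[4, 3, 2], size=(6, 20))       # shape (6, 20, 3)
smooth = np.stack([smooth_trajectory(t, traj) for traj in raw])

# pairwise distances between smoothed curves (shape and level together)
flat = smooth.reshape(len(smooth), -1)
Z = linkage(flat, method="ward")
print(fcluster(Z, t=2, criterion="maxclust"))            # cluster labels
```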

Relevance:

30.00%

Publisher:

Abstract:

In this paper we examine the problem of compositional data from a different starting point. Chemical compositional data, as used in provenance studies on archaeological materials, will be approached from measurement theory. The results will show, in a very intuitive way, that chemical data can only be treated using the approach developed for compositional data. It will be shown that compositional data analysis is a particular case of projective geometry, arising when the projective coordinates lie in the positive orthant and have the properties of logarithmic interval metrics. Moreover, it will be shown that this approach can be extended to a very large number of applications, including shape analysis. This will be exemplified with a case study on the architecture of Early Christian churches dated to the 5th-7th centuries AD.
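
The "logarithmic interval metric" is not written out in the abstract; for orientation, the standard log-ratio (Aitchison) distance on the simplex, stated via the centred log-ratio representation (my addition, a standard formula rather than the paper's derivation), is:

```latex
% Centred log-ratio (clr) representation of a D-part composition x,
% with g(x) the geometric mean of its parts:
\operatorname{clr}(\mathbf{x}) = \left(\ln\frac{x_1}{g(\mathbf{x})},\ \dots,\ \ln\frac{x_D}{g(\mathbf{x})}\right),
\qquad g(\mathbf{x}) = \Big(\prod_{i=1}^{D} x_i\Big)^{1/D}

% Aitchison (log-ratio) distance: the Euclidean distance between clr vectors:
d_a(\mathbf{x},\mathbf{y}) = \left\lVert \operatorname{clr}(\mathbf{x}) - \operatorname{clr}(\mathbf{y}) \right\rVert_2
```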

Relevance:

30.00%

Publisher:

Abstract:

Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthén, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data of another study (Kogovšek et al., 2002) carried out on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data to be hierarchical; therefore a multilevel analysis is required. We use Muthén's partial maximum likelihood approach, called the pseudobalanced solution (Muthén, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Mass, 2001). Several analyses are carried out in order to compare this multilevel analysis to classical methods of analysis, such as those used in Kogovšek et al. (2002), who analysed the data only at the group (ego) level by considering averages over all alters within each ego. We show that some of the results obtained by classical methods are biased and that multilevel analysis provides more detailed information that greatly enriches the interpretation of the reliability and validity of hierarchical data. Within-ego and between-ego reliabilities and validities and other related quality measures are defined, computed and interpreted.
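
A minimal sketch (illustrative only; it is not the pseudobalanced maximum likelihood estimator) of the hierarchical structure the multilevel model exploits: tie-level scores are split into a between-ego part (ego means) and a within-ego part (alter deviations around their ego's mean). The data and variance values below are simulated.

```python
import numpy as np
import pandas as pd

# hypothetical tie-level data: several alters nested within each ego
rng = np.random.default_rng(2)
egos = np.repeat(np.arange(50), 5)                   # 50 egos, 5 alters each
ego_effect = rng.normal(0, 1.0, 50)[egos]            # between-ego variation
score = ego_effect + rng.normal(0, 0.5, egos.size)   # plus within-ego variation
df = pd.DataFrame({"ego": egos, "score": score})

means = df.groupby("ego")["score"].mean()            # one mean per ego
between_var = means.var(ddof=1)                      # between-ego variance
within = df["score"] - df["ego"].map(means)          # deviations from own ego mean
within_var = within.var(ddof=1)                      # within-ego variance
icc = between_var / (between_var + within_var)       # share of variance at ego level
print(f"between: {between_var:.2f}  within: {within_var:.2f}  ICC: {icc:.2f}")
```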

Relevance:

30.00%

Publisher:

Abstract:

Measuring inequality of opportunity with the PISA databases involves several limitations: (i) the sample represents only a limited fraction of the cohorts of 15-year-olds in developing countries, and (ii) these fractions are not uniform across countries or across periods. This raises doubts about the reliability of these measurements when they are used for international comparisons: greater equity may be the result of a more restricted and more homogeneous sample. Unlike previous approaches based on reconstructing the samples, the approach of this paper is to provide a two-dimensional index that includes achievement and access as its dimensions. Several aggregation methods are used, and considerable changes in the rankings of (in)equality of opportunity are observed between looking at achievement alone and looking at both dimensions in the PISA 2006/2009 tests. Finally, a generalisation of the approach is proposed, allowing additional dimensions and other weights in the aggregation.
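
A minimal sketch (my own illustration of the kind of aggregation discussed, not the paper's index) combining an achievement-equity score and an access (coverage) rate into a single index with alternative aggregators; the country values and the weight are invented. Note how country B ranks higher on achievement equity alone but lower once access is included.

```python
import numpy as np

def aggregate(achievement_equity, access, w=0.5, method="geometric"):
    """Combine two [0, 1] dimensions into a single opportunity index."""
    a, c = np.asarray(achievement_equity), np.asarray(access)
    if method == "arithmetic":
        return w * a + (1 - w) * c
    return a ** w * c ** (1 - w)        # weighted geometric mean penalises imbalance

# two hypothetical countries: B looks more equal on achievement,
# but its sample covers far fewer of its 15-year-olds
equity = np.array([0.60, 0.75])         # country A, country B
access = np.array([0.90, 0.55])
print(aggregate(equity, access, method="arithmetic"))  # [0.75  0.65 ]
print(aggregate(equity, access, method="geometric"))   # [~0.73 ~0.64]
```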

Relevance:

30.00%

Publisher:

Abstract:

This is mainly a discussant paper on measurement criteria for sectoral investment and capital services, and on the way these composition and measurement issues come to have an impact on growth figures for some major sectors of the Colombian economy. The main focus is on matters of distinction regarding the measurement of capital stock and capital services in the production process. The availability of appropriate data, widely discussed throughout the document, implies that the major affirmations are more hypothetical than indicative or descriptive in style. Most statements are put forward as a motivating device for studies of sectoral activities with a focus on consistency with aggregate figures.

Relevance:

30.00%

Publisher:

Abstract:

Compositional data, also called multiplicative ipsative data, are common in survey research instruments in areas such as time use, budget expenditure and social networks. Compositional data are usually expressed as proportions of a total, whose sum can only be 1. Owing to their constrained nature, statistical analysis in general, and estimation of measurement quality with a confirmatory factor analysis model for multitrait-multimethod (MTMM) designs in particular, are challenging tasks. Compositional data are highly non-normal, as they range within the 0-1 interval. One component can only increase if some other(s) decrease, which results in spurious negative correlations among components which cannot be accounted for by the MTMM model parameters. In this article we show how researchers can use the correlated uniqueness model for MTMM designs in order to evaluate the measurement quality of compositional indicators. We suggest using the additive log ratio transformation of the data, discuss several approaches to deal with zero components and explain how the interpretation of MTMM designs differs from the application to standard unconstrained data. We show an illustration of the method on data of social network composition expressed in percentages of partner, family, friends and other members, in which we conclude that the face-to-face collection mode is generally superior to the telephone mode, although primacy effects are higher in the face-to-face mode. Compositions of strong ties (such as partner) are measured with higher quality than those of weaker ties (such as other network members).
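
A minimal sketch of the preprocessing the article suggests, an additive log-ratio transform preceded by a simple multiplicative replacement of zero components; the replacement value and the choice of reference part are assumptions made for illustration.

```python
import numpy as np

def replace_zeros(comp, delta=0.005):
    """Multiplicative replacement: give zero parts a small value delta and
    rescale the non-zero parts so each composition still sums to 1."""
    comp = np.asarray(comp, dtype=float)
    zeros = comp == 0
    out = comp * (1 - delta * zeros.sum(axis=-1, keepdims=True))
    out[zeros] = delta
    return out

def alr(comp):
    """Additive log-ratio transform, using the last part as the reference."""
    comp = np.asarray(comp, dtype=float)
    return np.log(comp[..., :-1] / comp[..., -1:])

# network composition: shares of partner, family, friends, other members
x = np.array([[0.25, 0.40, 0.00, 0.35],
              [0.10, 0.50, 0.20, 0.20]])
print(alr(replace_zeros(x)))
```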

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, the increase in the levels of solar ultraviolet radiation (UVR) reaching the Earth (mainly due to the decrease in stratospheric ozone), together with the detected increase in diseases related to UVR exposure, has led to a large volume of research on solar radiation in this band and its effects on humans. The ultraviolet index (UVI), which has been adopted internationally, was defined with the purpose of informing the general public about the risks of exposing the bare body to UVR and of conveying preventive messages. The UVI was initially defined as the daily maximum value. However, its current use has broadened, and it is meaningful to refer to an instantaneous value or to the daily evolution of the measured, modelled or forecast UVI. The specific UVI value is affected by the Sun-Earth geometry, clouds, ozone, aerosols, altitude and surface albedo. High-quality UVI measurements are essential as a reference and for studying long-term trends; accurate modelling techniques are also needed in order to understand the factors that affect UVR, to forecast the UVI, and as quality control of the measurements. The most accurate UVI measurements are expected to be obtained with spectroradiometers. However, since the cost of these devices is high, it is more common to find UVI data from erythemal radiometers (in fact, most UVI networks are equipped with this type of sensor). The best modelling results are obtained with multiple-scattering radiative transfer models when the input information is well known. However, input information such as the aerosol optical properties is often unknown, which can lead to important uncertainties in the modelling. Simpler models are often used for applications such as UVI forecasting or UVI mapping, since they are faster and require fewer input parameters. Within this framework, the general objective of this study is to analyse the agreement that can be reached between UVI measurement and modelling under cloudless conditions. Accordingly, this study presents model-measurement comparisons for different modelling techniques, different input options, and UVI measurements from both erythemal radiometers and spectroradiometers. As a general conclusion, it can be stated that model-measurement comparison is very useful for detecting limitations and estimating uncertainties in both the modelling and the measurements. Regarding the modelling, the main limitation found is the lack of knowledge of the aerosol information used as model input. Important differences were also found between ozone measured from satellite and from the ground, which can lead to important differences in the modelled UVI. PTUV, a new and simple parametrization for the fast calculation of the UVI under cloudless conditions, has been developed on the basis of radiative transfer calculations. The parametrization performs well both with respect to the base model and in comparison with several UVI measurements. PTUV has demonstrated its usefulness for particular applications such as the study of the annual evolution of the UVI at a given site (Girona) and the composition of high-resolution maps of typical UVI values for a particular territory (Catalonia).
Regarding the measurements, it is found to be very important to know the spectral response of erythemal radiometers in order to avoid large uncertainties in the measured UVI. These instruments, when well characterised, compare well with high-quality spectroradiometers in the measurement of UVI. The most important issues regarding the measurements are calibration and long-term stability. A temperature effect was also observed in PTFE, a material used in the diffusers of some instruments, which could potentially have important implications in the experimental field. Finally, concerning the model-measurement comparisons, the best agreement is found when UVI measurements from high-quality spectroradiometers are considered and radiative transfer models are used with the best available data for the ozone and aerosol optical parameters and their changes in time. In this way, the agreement can be as close as 0.1% in UVI, and is typically within 3%. This agreement deteriorates markedly if the aerosol information is ignored, and it depends strongly on the value of the aerosol single-scattering albedo. Other model inputs, such as the surface albedo and the ozone and temperature profiles, introduce a smaller uncertainty into the modelling results.
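
For orientation only: the abstract does not reproduce the PTUV coefficients, but simple clear-sky UVI parametrizations of this family typically reduce to a power law in the cosine of the solar zenith angle and the total ozone column. The sketch below uses one commonly cited analytic approximation with assumed coefficients; it is not PTUV.

```python
import numpy as np

def uvi_clear_sky(sza_deg, ozone_du):
    """Rough clear-sky, sea-level, low-aerosol UV index from the solar zenith
    angle and total ozone column, using an analytic approximation of the form
    UVI ~ 12.5 * mu0**2.42 * (ozone/300)**-1.23 (coefficients assumed here for
    illustration; the PTUV formula itself is not given in the abstract)."""
    mu0 = np.cos(np.radians(sza_deg))
    return 12.5 * mu0 ** 2.42 * (ozone_du / 300.0) ** (-1.23)

print(f"{uvi_clear_sky(20, 300):.1f}")   # near-noon summer conditions, ~11
print(f"{uvi_clear_sky(60, 320):.1f}")   # lower sun, slightly more ozone, ~2
```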

Relevance:

30.00%

Publisher:

Abstract:

Snakes are thought of as fear-relevant stimuli (biologically prepared to be associated with fear), which can lead to enhanced attentional capture when compared to fear-irrelevant stimuli. Inherent limitations of key-press measures may be bypassed by measuring eye movements, since these are more closely related to attentional processes than reaction times. An eye-tracking technique was combined with the flicker paradigm in two studies, with a sample of university students. In both studies, participants were instructed to detect changes between pairs of scenes. Attentional orienting towards the changing element in the scene was analysed, as well as the role of fear of snakes as a moderator variable. The results of both studies revealed a significantly shorter time to first fixation for snake stimuli compared with control stimuli. A facilitating effect of fear of snakes was also found, with highly fearful participants showing a shorter time to first fixation for snake stimuli than participants low in fear. The results are in line with current research supporting the advantage of snakes in grabbing attention due to their evolutionary and biological significance.
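
A minimal sketch (illustrative only; the column names, units and AOI format are assumptions) of how time to first fixation on the changing element might be computed from a trial's fixation records.

```python
import pandas as pd

def time_to_first_fixation(fixations, aoi):
    """Return the onset time of the first fixation landing inside the
    area of interest (x0, y0, x1, y1), or None if it was never fixated."""
    x0, y0, x1, y1 = aoi
    inside = fixations[(fixations.x.between(x0, x1)) &
                       (fixations.y.between(y0, y1))]
    return None if inside.empty else inside.start_ms.min()

# hypothetical fixation list for one trial (one row per fixation)
fix = pd.DataFrame({"start_ms": [120, 430, 780, 1010],
                    "x": [512, 300, 640, 655],
                    "y": [384, 200, 410, 405]})
snake_aoi = (600, 380, 700, 440)                 # bounding box of the changing snake
print(time_to_first_fixation(fix, snake_aoi))    # 780
```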

Relevance:

30.00%

Publisher:

Abstract:

Eye tracking has become a preponderant technique in the evaluation of user interaction and behaviour with study objects in defined contexts. Common eye-tracking data representation techniques offer valuable input regarding user interaction and eye gaze behaviour, namely through the measurement of fixations and saccades. However, these and other techniques may be insufficient for representing the data acquired in specific studies, namely because of the complexity of the study object being analysed. This paper contributes a summary of data representation and information visualization techniques used in data analysis within different contexts (advertising, websites, television news and video games). Additionally, several methodological approaches are presented which resulted from studies developed, and under development, at CETAC.MEDIA - Communication Sciences and Technologies Research Centre. In the studies described, traditional data representation techniques were insufficient; as a result, new approaches were necessary, and new forms of representing data, based on common techniques, were developed with the objective of improving communication and information strategies. For each of these studies, a brief summary of the contribution to its respective area is presented, as well as the data representation techniques used and some of the results obtained.

Relevance:

30.00%

Publisher:

Abstract:

An instrument is described which carries three orthogonal geomagnetic field sensors on a standard meteorological balloon package, to sense rapid motion and position changes during the ascent through the atmosphere. Because of the finite data bandwidth available over the UHF radio link, a burst sampling strategy is adopted: bursts of 9 s of measurements at 3.6 Hz are interleaved with periods of slow data telemetry lasting 25 s. The variability calculated in each channel is used to determine position changes, a method robust to periods of poor radio signal. During three balloon ascents, variability was found repeatedly at similar altitudes, simultaneously in each of the three orthogonal sensors carried. This variability is attributed to atmospheric motions. It is found that the vertical sensor is least prone to stray motions, and that the use of two horizontal sensors provides no additional information over a single horizontal sensor.
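
A minimal sketch (my own illustration; the threshold, units and array shapes are assumptions) of the burst-variability calculation: a 9 s burst at 3.6 Hz gives roughly 32 samples per channel, and the per-burst standard deviation of each orthogonal channel flags bursts containing significant motion.

```python
import numpy as np

FS = 3.6                  # burst sampling rate (Hz)
BURST_S = 9.0             # burst duration (s)
N = int(FS * BURST_S)     # ~32 samples per burst and channel

def burst_variability(burst):
    """Standard deviation of each of the three orthogonal magnetometer
    channels within one burst; `burst` has shape (N, 3)."""
    return burst.std(axis=0, ddof=1)

def motion_detected(burst, threshold=0.05):
    """Flag a burst as containing motion if any channel exceeds the
    (assumed) variability threshold, in the same units as the field data."""
    return bool(np.any(burst_variability(burst) > threshold))

# toy example: a quiet burst and a burst with oscillatory motion on one channel
rng = np.random.default_rng(3)
quiet = rng.normal(0.0, 0.01, size=(N, 3))
t = np.arange(N) / FS
moving = quiet.copy()
moving[:, 0] += 0.2 * np.sin(2 * np.pi * 0.5 * t)        # swinging payload
print(motion_detected(quiet), motion_detected(moving))   # False True
```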