911 results for CIDOC Conceptual Reference Model
Abstract:
Pervasive Sensing is a recent research trend that aims at providing widespread computing and sensing capabilities to enable the creation of smart environments that can sense, process, and act on input coming from both people and devices. The capabilities necessary for Pervasive Sensing are nowadays available on a plethora of devices, from embedded devices to PCs and smartphones. The wide availability of new devices and the large amount of data they can access enable a wide range of novel services in different areas, spanning from simple data collection systems to socially aware collaborative filtering. However, the strong heterogeneity and unreliability of devices and sensors pose significant challenges. So far, existing work on Pervasive Sensing has focused only on limited portions of the whole stack of available devices and the data they can use, proposing and developing mainly vertical solutions. The push from academia and industry for this kind of service shows that the time is ripe for a more general support framework for Pervasive Sensing solutions, one able to strengthen frail architectures, promote a well-balanced usage of resources on different devices, and enable the widest possible access to sensed data, while ensuring minimal energy consumption on battery-operated devices. This thesis analyzes pervasive sensing systems to extract design guidelines that serve as the foundation of a comprehensive reference model for multi-tier Pervasive Sensing applications. The validity of the proposed model is tested in five different scenarios with distinct requirements, hardware, and sensors. The ease of mapping the proposed logical model onto the real implementations and the positive results of the performance campaigns demonstrate the quality of the proposed approach and provide a reliable reference model, together with a direction for the design and deployment of future Pervasive Sensing applications.
Abstract:
In this work, simulations of liquids at the molecular level were carried out using different multiscale techniques. These allow an effective description of the liquid that requires less computing time and can therefore capture phenomena on longer time and length scales. A key aspect is a simplified ("coarse-grained") model, which is obtained in a systematic procedure from simulations of the detailed model; selected properties of the detailed model (e.g. pair correlation function, pressure, etc.) are reproduced. Algorithms were investigated that allow a simultaneous coupling of the detailed and the simplified model ("Adaptive Resolution Scheme", AdResS). Here the detailed model is used in a predefined subvolume of the liquid (e.g. near a surface), while the rest is described with the simplified model. For this purpose, a method ("thermodynamic force") was developed to enable the coupling even when the models are in different thermodynamic states. In addition, a novel coupling algorithm (H-AdResS) was described that formulates the coupling by means of a Hamiltonian; in this algorithm a correction analogous to the thermodynamic force is possible with less computational effort. As an application of these basic techniques, path-integral molecular dynamics (MD) simulations of water were investigated. With this method it is possible to include quantum-mechanical effects of the nuclei (delocalization, zero-point energy) in the simulation. First, a multiscale technique ("force matching") was used to extract an effective interaction from a detailed simulation based on density functional theory. The path-integral MD simulation improves the description of the intramolecular structure in comparison with experimental data. The model is also suitable for simultaneous coupling within a single simulation, in which a water molecule (described by 48 point particles in the path-integral MD model) is coupled to a simplified model (a single point particle). In this way a water-vacuum interface could be simulated, with only the surface described in the path-integral model and the rest in the simplified model.
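For illustration only (not code from the thesis), the adaptive-resolution coupling described above can be sketched as an interpolation between the detailed and coarse-grained force fields, using a switching function w(x) that equals 1 in the atomistic region and 0 in the coarse-grained region; the function names, the cos² ramp, and all numerical values below are assumptions.

```python
import numpy as np

def switching_weight(x, x_atomistic_edge, hybrid_width):
    """AdResS-style resolution weight along one coordinate: 1 in the atomistic
    region, 0 in the coarse-grained region, smooth cos^2 ramp in between."""
    if x <= x_atomistic_edge:
        return 1.0
    if x >= x_atomistic_edge + hybrid_width:
        return 0.0
    s = (x - x_atomistic_edge) / hybrid_width
    return float(np.cos(0.5 * np.pi * s) ** 2)

def pair_force(f_atomistic, f_coarse, w_a, w_b):
    """Interpolated force F = w_a*w_b * F_atomistic + (1 - w_a*w_b) * F_coarse."""
    lam = w_a * w_b
    return lam * f_atomistic + (1.0 - lam) * f_coarse

# Illustrative use: molecule A sits in the hybrid layer, molecule B further out.
w_a = switching_weight(1.2, x_atomistic_edge=1.0, hybrid_width=2.0)
w_b = switching_weight(2.5, x_atomistic_edge=1.0, hybrid_width=2.0)
print(pair_force(np.array([0.30, 0.05]), np.array([0.10, 0.02]), w_a, w_b))
```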
Abstract:
Interest in automatic volume meshing for finite element analysis (FEA) has grown since the appearance of microfocus CT (μCT), whose high resolution allows mechanical behaviour to be assessed with high precision. Nevertheless, the basic meshing approach of generating one hexahedron per voxel produces jagged edges. To prevent this effect, smoothing algorithms have been introduced to enhance the topology of the mesh. However, whether smoothing also improves the accuracy of voxel-based meshes in clinical applications remains an open question. There is a trade-off between smoothing and element quality: excessive smoothing may produce distorted elements and reduce the accuracy of the mesh. In the present work, the influence of smoothing on the accuracy of voxel-based meshes in micro-FE was assessed. An accurate 3D model of a trabecular structure with known apparent mechanical properties was used as a reference model. Virtual CT scans of this reference model (with resolutions of 16, 32 and 64 μm) were then created and used to build voxel-based meshes of the microarchitecture. The effects of smoothing on the apparent mechanical properties of the voxel-based meshes, as compared to the reference model, were evaluated. The apparent Young's moduli of the smoothed voxel-based meshes were significantly closer to those of the reference model for the 16 and 32 μm resolutions. Improvements were not significant for the 64 μm resolution, due to loss of trabecular connectivity in the model. This study shows that smoothing offers a real benefit to voxel-based meshes used in micro-FE. It might also extend voxel-based meshing to other biomechanical domains where it was previously not used due to a lack of accuracy. As an example, this work will be used in the framework of the European project ContraCancrum, which aims at providing oncologists with a patient-specific simulation of tumour development in the brain and lungs. For this type of clinical application, such fast, automatic, and accurate mesh generation is of great benefit.
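As a point of reference for the baseline approach mentioned above (one hexahedron per voxel, before any smoothing), here is a minimal illustrative sketch; the array layout, node numbering, and voxel size are assumptions, not the meshing pipeline used in the study.

```python
import numpy as np

def voxel_to_hex_mesh(mask, voxel_size=1.0):
    """Build one hexahedral element per filled voxel of a 3D binary mask.
    Returns (nodes, elements): node coordinates and 8-node connectivity."""
    node_id = {}            # (i, j, k) grid corner -> node index
    nodes, elements = [], []

    def get_node(i, j, k):
        key = (i, j, k)
        if key not in node_id:
            node_id[key] = len(nodes)
            nodes.append((i * voxel_size, j * voxel_size, k * voxel_size))
        return node_id[key]

    for i, j, k in zip(*np.nonzero(mask)):
        # Eight corners of voxel (i, j, k): bottom face first, then top face.
        corners = [(i, j, k), (i + 1, j, k), (i + 1, j + 1, k), (i, j + 1, k),
                   (i, j, k + 1), (i + 1, j, k + 1), (i + 1, j + 1, k + 1), (i, j + 1, k + 1)]
        elements.append([get_node(*c) for c in corners])

    return np.array(nodes), np.array(elements)

# Example: a 2x2x2 voxel block with one voxel removed, 16 um voxels.
mask = np.ones((2, 2, 2), dtype=bool)
mask[1, 1, 1] = False
nodes, elems = voxel_to_hex_mesh(mask, voxel_size=16e-6)
print(nodes.shape, elems.shape)   # nodes are shared between neighbouring voxels
```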
Abstract:
A dynamic deterministic simulation model was developed to assess the impact of different putative control strategies on the seroprevalence of Neospora caninum in female Swiss dairy cattle. The model structure comprised compartments of "susceptible" and "infected" animals (SI-model) and the cattle population was divided into 12 age classes. A reference model (Model 1) was developed to simulate the current (status quo) situation (present seroprevalence in Switzerland 12%), taking into account available demographic and seroprevalence data of Switzerland. Model 1 was modified to represent four putative control strategies: testing and culling of seropositive animals (Model 2), discontinued breeding with offspring from seropositive cows (Model 3), chemotherapeutic treatment of calves from seropositive cows (Model 4), and vaccination of susceptible and infected animals (Model 5). Models 2-4 considered different sub-scenarios with regard to the frequency of diagnostic testing. Multivariable Monte Carlo sensitivity analysis was used to assess the impact of uncertainty in input parameters. A policy of annual testing and culling of all seropositive cattle in the population reduced the seroprevalence effectively and rapidly from 12% to <1% in the first year of simulation. The control strategies with discontinued breeding with offspring from all seropositive cows, chemotherapy of calves and vaccination of all cattle reduced the prevalence more slowly than culling but were still very effective (reduction of prevalence below 2% within 11, 23 and 3 years of simulation, respectively). However, sensitivity analyses revealed that the effectiveness of these strategies depended strongly on the quality of the input parameters used, such as the horizontal and vertical transmission factors, the sensitivity of the diagnostic test and the efficacy of medication and vaccination. Finally, all models confirmed that it was not possible to completely eradicate N. caninum as long as the horizontal transmission process was not interrupted.
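The compartmental logic described above can be illustrated with a deliberately simplified sketch: an annual-step SI model with horizontal and vertical transmission and an optional test-and-cull control. All rates, the single age class, and the constant herd size are illustrative assumptions, not the parameterization of the Swiss model.

```python
def simulate_si(years, S0, I0, beta_h, p_vert, replacement, cull=False, test_sens=0.95):
    """Deliberately simplified annual-step SI model for a closed cattle herd."""
    S, I = float(S0), float(I0)
    prevalence = []
    for _ in range(years):
        N = S + I
        # Horizontal transmission (frequency dependent, one step per year).
        new_inf = beta_h * S * I / N
        S -= new_inf
        I += new_inf
        # Replacement heifers: calves of infected dams may be infected vertically.
        births = replacement * N
        new_I = births * (I / N) * p_vert
        new_S = births - new_I
        # Scale back to the original herd size so the population stays constant.
        S = (S + new_S) * N / (N + births)
        I = (I + new_I) * N / (N + births)
        # Optional control: annual serological test, detected positives culled
        # and replaced by seronegative purchases.
        if cull:
            detected = test_sens * I
            I -= detected
            S += detected
        prevalence.append(I / (S + I))
    return prevalence

# Illustrative run: 12% starting seroprevalence, test-and-cull switched on.
print(simulate_si(10, S0=880, I0=120, beta_h=0.05, p_vert=0.85,
                  replacement=0.25, cull=True))
```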
Abstract:
Electrical Power Assisted Steering (EPAS) will likely be used in future automotive power steering systems. The sinusoidal brushless DC (BLDC) motor has been identified as one of the most suitable actuators for the EPAS application. Variations in motor characteristics, reflected in motor parameters such as the coil resistance and the torque constant, directly introduce inaccuracies into a control scheme based on nominal parameter values, and the performance of the whole system suffers. The motor controller must address the time-varying motor characteristics and maintain performance over its long service life. In this dissertation, four adaptive control algorithms for brushless DC (BLDC) motors are explored. The first algorithm employs a simplified inverse dq-coordinate dynamics controller and solves for the parameter errors using q-axis current (iq) feedback from several past sampling steps. The controller parameter values are updated by slow integration of the parameter errors. Improvements such as dynamic approximation, speed approximation, and Gram-Schmidt orthonormalization are discussed for better estimation performance. The second algorithm uses both the d-axis current (id) and the q-axis current (iq) feedback for parameter estimation, since id always accompanies iq. Stochastic conditions for unbiased estimation are shown through Monte Carlo simulations. Study of the first two adaptive algorithms indicates that better parameter estimation performance can be achieved by using more historical data. The Extended Kalman Filter (EKF), a representative recursive estimation algorithm, is then investigated for the BLDC motor application. Simulation results validate the superior estimation performance of the EKF. However, computational complexity and stability may be barriers to practical implementation of the EKF. The fourth algorithm is a model reference adaptive control (MRAC) that utilizes the desired motor characteristics as a reference model. Its stability is guaranteed by Lyapunov's direct method. Simulations show superior performance in terms of convergence speed and current tracking. These algorithms are compared in closed-loop simulation with an EPAS model and a motor speed control application. The MRAC is identified as the most promising candidate controller because of its combination of superior performance and low computational complexity. A BLDC motor controller developed with the dq-coordinate model cannot be implemented without several supplemental functions such as the coordinate transformation and a DC-to-AC current encoding scheme. A quasi-physical BLDC motor model is developed to study the practical implementation issues of the dq-coordinate control strategy, such as the initialization and the rotor angle transducer resolution. This model can also be beneficial during first-stage development in automotive BLDC motor applications.
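As a minimal illustration of the MRAC idea summarized above (a reference model prescribing the desired response and Lyapunov-based gain adaptation), the scalar sketch below adapts a current controller for a first-order approximation of the q-axis dynamics; the plant parameters, adaptation rate, and sampling settings are illustrative assumptions, not the dissertation's design.

```python
# Plant: simplified q-axis current loop di/dt = -(R/L)*i + (1/L)*v  (illustrative values)
R, L = 0.35, 1.2e-3
a_p, b_p = R / L, 1.0 / L

# Reference model: desired closed-loop response di_m/dt = -a_m*i_m + b_m*i_ref
a_m, b_m = 1500.0, 1500.0

gamma = 0.5                    # adaptation rate (tuning assumption)
theta_r, theta_y = 0.0, 0.0    # adjustable feedforward / feedback gains
dt, T = 1e-5, 0.2
i, i_m = 0.0, 0.0

for k in range(int(T / dt)):
    t = k * dt
    i_ref = 10.0 if t % 0.02 < 0.01 else 2.0   # square-wave current demand (A)
    u = theta_r * i_ref + theta_y * i           # adaptive control law
    # Forward-Euler integration of plant and reference model.
    i += dt * (-a_p * i + b_p * u)
    i_m += dt * (-a_m * i_m + b_m * i_ref)
    e = i - i_m                                 # tracking error w.r.t. reference model
    # Lyapunov-based gain adaptation (sign(b_p) = +1 here).
    theta_r -= dt * gamma * e * i_ref
    theta_y -= dt * gamma * e * i

print(f"tracking error {e:+.3f} A, theta_r={theta_r:.2f}, theta_y={theta_y:.2f}")
```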
Abstract:
Teaching is a dynamic activity. It can be very effective if its impact is constantly monitored and adjusted to the demands of changing social contexts and the needs of learners. This implies that teachers need to be aware of teaching and learning processes. Moreover, they should constantly question their didactical methods and the learning resources they provide to their students. They should reflect on whether their actions are suitable, and they should regulate their teaching, e.g., by updating learning materials based on new knowledge about learners, or by motivating learners to engage in further learning activities. In recent years, a rising interest in 'learning analytics' has been observable. This interest is motivated by the availability of massive amounts of educational data. Continuously increasing processing power and a strong motivation to discover new information in these pools of educational data are also pushing developments within the learning analytics research field. Learning analytics could be a method for reflective teaching practice that enables and guides teachers to investigate and evaluate their work in future learning scenarios. However, this potentially positive impact has not yet been sufficiently verified by learning analytics research. Another method that pursues these goals is 'action research'. Learning analytics promises to initiate action research processes because it facilitates awareness, reflection and regulation of teaching activities analogously to action research. Therefore, this thesis joins both concepts in order to improve the design of learning analytics tools. The central research questions of this thesis are: What are the dimensions of learning analytics in relation to action research which need to be considered when designing a learning analytics tool? How does a learning analytics dashboard affect the teachers of technology-enhanced university lectures regarding 'awareness', 'reflection' and 'action'? Does it initiate action research? What are the central requirements for a learning analytics tool that pursues such effects? This project followed design-based research principles in order to answer these research questions. The main contributions are: a theoretical reference model that connects action research and learning analytics, the conceptualization and implementation of a learning analytics tool, a requirements catalogue for useful and usable learning analytics design based on evaluations, a tested procedure for impact analysis, and guidelines for the introduction of learning analytics into higher education.
Abstract:
In recent years, learning analytics (LA) has attracted a great deal of attention in technology-enhanced learning (TEL) research, as practitioners, institutions, and researchers increasingly see the potential that LA has to shape the future TEL landscape. Generally, LA deals with the development of methods that harness educational data sets to support the learning process. This paper provides a foundation for future research in LA. It gives a systematic overview of this emerging field and its key concepts through a reference model for LA based on four dimensions, namely data, environments and context (what?), stakeholders (who?), objectives (why?), and methods (how?). It further identifies various challenges and research opportunities in the area of LA in relation to each dimension.
Abstract:
Precipitation in the McMurdo Dry Valleys (MDV, Ross Sea region, Antarctica) exhibits extreme seasonality in ion concentration, varying by 3-5 orders of magnitude between summer and winter precipitation. To identify aerosol sources and investigate the causes of the observed amplitude in concentration variability, four snow pits were sampled along a coast-Polar Plateau transect across the MDV. The elevation of the sites ranges from 50 to 2400 m and the distance from the coast from 8 to 93 km. Average chemistry gradients along the transect indicate that most species have either a predominantly marine or terrestrial source in the MDV. Empirical orthogonal function analysis of the snow-chemistry time series shows that at least 57% of aerosol deposition occurs concurrently. A conceptual climate model, based on meteorological observations, is used to explain the strong seasonality in the MDV. Our results suggest that radiative forcing of the ice-free valleys creates a surface low-pressure cell during summer, which promotes air-mass flow from the Ross Sea. The associated precipitating air mass is relatively warm, humid, and contains a high concentration of aerosols. During winter, the MDV are dominated by air masses draining off the East Antarctic ice sheet, which are cold, dry, and low in aerosol concentration. The strong differences between these two air-mass sources create in the MDV a polar version of monsoonal flow, with humid, warm summers and dry, cold winters.
Abstract:
We present a conceptual prototype model of a focal plane array unit for the STEAMR instrument, highlighting the challenges presented by the instrument's required high relative beam proximity and focusing on how edge-diffraction effects contribute to the array's performance. The analysis was carried out as a comparative process using both PO & PTD and MoM techniques. We first highlight general differences between these computational techniques, with the discussion focusing on diffractive edge effects for near-field imaging reflectors with high truncation. We then present the results of in-depth modeling analyses of the STEAMR focal plane array, followed by near-field antenna measurements of a breadboard model of the array. The results of these near-field measurements agree well with both simulation techniques, although MoM shows slightly higher complex beam coupling to the measurements than PO & PTD.
Abstract:
Lateral diffusivity is computed from a tracer release experiment in the northeastern tropical Atlantic thermocline. The uncertainties of the estimates are inferred from a synthetic particle release using a high-resolution ocean circulation model. The main method employed to compute the zonal and meridional components of lateral diffusivity is the growth of the second moment of a cloud of tracer. The application of an areal comparison method for estimating tracer-based diffusivity in the field experiments is also discussed. The best estimate of meridional eddy diffusivity in the Guinea Upwelling region at about 300 m depth is inline image m² s⁻¹. The zonal component of lateral diffusivity is estimated to be inline image m² s⁻¹, while the areal comparison method yields an areally equivalent zonal diffusivity component of inline image m² s⁻¹. Kx is about twice as large as Ky, resulting from the stretching of the tracer patch by zonal jets. The conceptual jet model employed indicates that zonal jet velocities of about inline image m s⁻¹ are required to explain the enhancement of the zonal eddy diffusivity component. Finally, different sampling strategies are tested on synthetic tracer release experiments. They indicate that the best sampling strategy is a sparse regular sampling grid covering most of the tracer patch.
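The second-moment method named above amounts to K ≈ ½ dσ²/dt in each horizontal direction. The sketch below estimates Kx and Ky from synthetic particle positions with a least-squares slope; the random-walk data and all numbers are illustrative, not the tracer-release observations.

```python
import numpy as np

def eddy_diffusivity(times, positions):
    """Estimate lateral eddy diffusivities (Kx, Ky) in m^2/s from particle positions.

    times      (n_t,) array of times in seconds
    positions  (n_t, n_particles, 2) array of x, y positions in metres
    Uses K = 0.5 * d(sigma^2)/dt, with the slope fitted by least squares.
    """
    var = positions.var(axis=1)            # (n_t, 2): second moment about the centroid
    slopes = np.polyfit(times, var, 1)[0]  # d(sigma^2)/dt for the x and y columns
    return 0.5 * slopes

# Synthetic random-walk cloud with known diffusivities Kx = 1000, Ky = 500 m^2/s.
rng = np.random.default_rng(0)
dt, n_steps, n_particles = 3600.0, 24 * 30, 2000      # hourly steps for 30 days
K_true = np.array([1000.0, 500.0])
steps = rng.normal(scale=np.sqrt(2 * K_true * dt), size=(n_steps, n_particles, 2))
positions = np.cumsum(steps, axis=0)
times = dt * np.arange(1, n_steps + 1)
print(eddy_diffusivity(times, positions))              # expect roughly [1000, 500]
```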
Abstract:
Continuous sea salt and mineral dust aerosol records have been studied on the two EPICA (European Project for Ice Coring in Antarctica) deep ice cores. The joint use of these records from opposite sides of the East Antarctic plateau allows for an estimate of changes in dust transport and emission intensity as well as for the identification of regional differences in the sea salt aerosol source. The mineral dust flux records at both sites show a strong coherency over the last 150 kyr, related to dust emission changes in the glacial Patagonian dust source, with dust fluxes three times higher in the Atlantic than in the Indian Ocean sector of the Southern Ocean (SO). A simple conceptual transport model indicates that transport can explain only 40% of the atmospheric dust concentration changes in Antarctica, whereas changes by a factor of 5-10 occurred. Accordingly, the main cause for the strong glacial dust flux changes in Antarctica must lie in environmental changes in Patagonia. Dust emissions, and hence environmental conditions in Patagonia, were very similar during the last two glacials and interglacials, respectively, despite temperatures recorded in Antarctica during the penultimate interglacial being 2-4 °C warmer than today. Sea salt fluxes 2-3 times higher in both ice cores during the glacial compared to the Holocene are difficult to reconcile with a largely unchanged transport intensity and a distant open-ocean source. The substantial glacial enhancements in sea salt aerosol fluxes can be readily explained assuming sea ice formation as the main sea salt aerosol source, with a significantly larger expansion of (summer) sea ice in the Weddell Sea than in the Indian Ocean sector. During the penultimate interglacial, our sea salt records point to a 50% reduction of winter sea ice coverage compared to the Holocene in both the Indian and Atlantic Ocean sectors of the SO. However, from 20 to 80 ka before present, sea salt fluxes show only very subdued millennial changes despite pronounced temperature fluctuations, likely due to the large distance of the sea-ice salt source from our drill sites.
Abstract:
Providing accurate maps of coral reefs where the spatial scale and labels of the mapped features correspond to map units appropriate for examining biological and geomorphic structures and processes is a major challenge for remote sensing. The objective of this work is to assess the accuracy and relevance of the process used to derive geomorphic zone and benthic community zone maps for three western Pacific coral reefs produced from multi-scale, object-based image analysis (OBIA) of high-spatial-resolution multi-spectral images, guided by field survey data. Three Quickbird-2 multi-spectral data sets from reefs in Australia, Palau and Fiji and georeferenced field photographs were used in a multi-scale segmentation and object-based image classification to map geomorphic zones and benthic community zones. A per-pixel approach was also tested for mapping benthic community zones. Validation of the maps and comparison to past approaches indicated the multi-scale OBIA process enabled field data, operator field experience and a conceptual hierarchical model of the coral reef environment to be linked to provide output maps at geomorphic zone and benthic community scales on coral reefs. The OBIA mapping accuracies were comparable with previously published work using other methods; however, the classes mapped were matched to a predetermined set of features on the reef.
Abstract:
The focus of this study is the analysis of a local hydrogeological system in the subhumid outer tropics of the West African country of Benin. The aim was to characterize, qualify, and quantify the hydrogeological and hydrological properties of the approximately 30 km² study area and to develop a conceptual hydrogeological model. This model should provide the basis for further studies on a regional scale. The main goal was to obtain process knowledge of the hydrogeological system and to determine the process and quantity of groundwater recharge in the working area. In line with these objectives, a broad hydrogeological approach was chosen. In a spatially extensive network at the local scale, TDR probes, suction cups, and groundwater observation bores were installed. In addition, in multidisciplinary cooperation with hydrology, geography, soil science, biology, meteorology, and plant nutrition sciences, instruments such as discharge gauging stations, tensiometers, lysimeters, climate stations, runoff plots, and erosion pins were installed at the test site to investigate the relevant parameters of the hydrological cycle.
Abstract:
This thesis examines the integration of the telecommunications sector with the IT and media sectors that make up the current ICT hyper-sector, in order to put forward a value proposition at two levels. On one side, the WIMS 2.0 initiative addresses the technological and strategic aspects of telco-Internet convergence; on this basis a new business model is then defined which, adapted to the new integrated sector and following novel paradigms such as those posed by open innovation, generates new revenue streams in areas not typical for telecom operators. Throughout Chapter 2, the reader will find a contextualization of the broadband communications environment from three perspectives: technological, economic, and the current market, all considered at national, European, and global scale. This establishes the basis for the development of the following chapters by showing how the penetration of broadband has driven the development of a new value system in the integrated ICT sector, around which original business model proposals arise that are categorized in a dedicated taxonomy. The third chapter outlines the value proposition of the WIMS 2.0 initiative, founded and led by the author of this thesis. WIMS 2.0, as an open initiative, presents to the community a proposal for a new ecosystem and an integrated reference model on which to deploy converged services. Additionally, beyond the theoretical approach, the practical side is provided by the deployment of the reference model within the architecture of an operator such as Telefónica. Chapter 4 presents the Innovation 2.0 business model, based on open innovation, with the goal of capturing new revenue streams by expanding the portfolio of innovative services thanks to fresh and brilliant ideas from start-ups. Innovation 2.0, far from being a mere theoretical proposition, shows its benefits through its successful performance in the market, which has validated the initial hypotheses. The last chapter sets out future lines of research for both the WIMS 2.0 initiative and the Innovation 2.0 model, some of which are already beginning to be addressed.
Abstract:
This paper analyzes the role of Computer Algebra Systems (CAS) in a competence-based learning model. The proposal is an e-learning Linear Algebra course for Engineering, which includes the use of a CAS (Maxima) and focuses on problem solving. A reference model has been taken from the Spanish Open University. The proper use of the CAS is defined as an indicator of the generic competence 'Use of Technology'. Additionally, we show that using a CAS could help to enhance the following generic competences: Self-Learning, Planning and Organization, Communication and Writing, Mathematical and Technical Writing, Information Management, and Critical Thinking.
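Since the course described above uses Maxima, the snippet below is only a Python/SymPy stand-in for the same kind of CAS exercise (solving a linear system and diagonalizing a matrix symbolically, with the CAS handling the algebraic bookkeeping); the matrices are arbitrary examples.

```python
import sympy as sp

A = sp.Matrix([[2, 1, -1], [1, 3, 2], [1, 0, 0]])
b = sp.Matrix([8, 9, 2])

print(A.solve(b))          # unique solution [2, 3, -1] of A*x = b
print(A.rank(), A.det())   # full rank, determinant 5

M = sp.Matrix([[2, 1], [1, 2]])
P, D = M.diagonalize()     # M = P * D * P**-1
print(D)                   # diag(1, 3)
```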