791 results for Digital mapping--Problems, exercises, etc.
Abstract:
Bidirectional DC-DC converters are widely used in applications such as energy storage systems, Electric Vehicles (EVs), and UPS. In particular, future EVs require bidirectional power flow in order to integrate energy storage units into smart grids. These bidirectional power converters provide Grid-to-Vehicle (G2V)/Vehicle-to-Grid (V2G) power flow capability for future EVs. Generally, two control loops are used for bidirectional DC-DC converters: the inner current loop and the outer loop. The control of Dual Active Bridge (DAB) converters used in EVs has proved to be challenging due to the wide range of operating conditions and the non-linear behavior of the converter. In this thesis, a precise mathematical model of the converter is derived, and non-linear control schemes for bidirectional DC-DC converters are proposed based on the derived model. The proposed inner current control technique is developed based on a novel Geometric-Sequence Control (GSC) approach and offers significantly improved performance compared to conventional control approaches. The proposed technique uses a simple control algorithm that saves computational resources; it therefore has higher reliability, which is essential in this application. Although the proposed control technique is based on the mathematical model of the converter, its robustness against parameter uncertainties is proven. Three control modes for charging the traction batteries in EVs are investigated in this thesis: voltage mode control, current mode control, and power mode control. Each of the three control modes determines the outer control loop, whose structure provides the current reference for the inner current loop. Comprehensive computer simulations have been conducted to evaluate the performance of the proposed control methods.
In addition, the proposed control schemes have been verified on a 3.3 kW experimental prototype. Simulation and experimental results show the superior performance of the proposed control techniques over conventional ones.
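The cascaded structure described in this abstract — an outer loop (voltage, current, or power mode) generating the current reference that the inner current loop tracks — can be sketched generically. The toy simulation below uses made-up gains and a first-order plant purely for illustration; it is not the GSC technique or the converter model from the thesis.

```python
# Minimal sketch of a two-loop (cascaded) control structure: an outer
# voltage-mode PI loop produces the current reference for a fast inner
# current loop. All gains, time constants, and the plant are illustrative
# placeholders, not values from the thesis.

def outer_voltage_loop(v_ref, v_out, integ, kp=0.5, ki=20.0, dt=1e-4):
    """PI controller: produces the current reference for the inner loop."""
    err = v_ref - v_out
    integ += err * dt
    return kp * err + ki * integ, integ

def inner_current_loop(i_ref, i_meas, kp=2.0):
    """Proportional inner loop: produces the actuation command."""
    return kp * (i_ref - i_meas)

def simulate(v_ref=48.0, steps=5000, dt=1e-4):
    v_out, i_l, integ = 0.0, 0.0, 0.0
    C, R, L_tau = 1e-3, 10.0, 5e-4       # toy output filter / resistive load
    for _ in range(steps):
        i_ref, integ = outer_voltage_loop(v_ref, v_out, integ, dt=dt)
        u = inner_current_loop(i_ref, i_l)
        i_l += (u - i_l) * dt / L_tau        # fast inner current dynamics
        v_out += (i_l - v_out / R) * dt / C  # slower output-voltage dynamics
    return v_out

print(round(simulate(), 1))
```

The outer integral term removes the steady-state error left by the purely proportional inner loop, which is why the output settles at the reference.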
Abstract:
The Pico de Navas landslide was a large-magnitude rotational movement affecting 50×10⁶ m³ of hard to soft rocks. The objectives of this study were: (1) to characterize the landslide in terms of geology, geomorphological features and geotechnical parameters; and (2) to obtain an adequate geomechanical model to comprehensively explain its rupture, considering topographic, hydro-geological and geomechanical conditions. The rupture surface crossed, from top to bottom: (a) more than 200 m of limestone and clay units of the Upper Cretaceous, affected by faults; and (b) the Albian unit of Utrillas facies of the Lower Cretaceous, composed of silty sand with clay (kaolinite). This sand played an important role in the basal failure of the slide due to the influence of fine particles (silt and clay), which on average made up more than 70% of the sand, and the high kaolinite content (>40%) in some beds. Its geotechnical parameters are: unit weight (δ) = 19–23 kN/m³; friction angle (φ) = 13°–38°; and cohesion (c) = 10–48 kN/m². Its microstructure consists of accumulations of kaolinite crystals stuck to terrigenous grains, forming clayey peds. We hypothesize that the presence of these aggregates was the internal cause of fluidification of this layer once wet. Besides the faulted structure of the massif, other conditioning factors of the movement were: the large load of the upper limestone layers; high water table levels; high pore-water pressure; and the loss of strength due to wet conditions. A 3D simulation of the stability conditions concurs with our hypothesis. The landslide occurred in the Recent or Middle Holocene, certainly before at least 500 BC and possibly during a wet climate period. Today, it appears to be inactive. This study helps to understand the frequent slope instabilities along the Iberian Range wherever the Utrillas facies is present.
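As a rough plausibility check of the failure mechanism described above, a textbook infinite-slope factor-of-safety calculation with the reported parameter ranges shows how wet conditions (lower cohesion and friction at the weak end of the range, plus pore-water pressure) can push a slope from stable to unstable. The slope angle, depth and pore pressure below are assumed values chosen for illustration, not data from the study, and this simple formula is not the 3D simulation the authors used.

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, depth, beta_deg, pore_u=0.0):
    """Textbook infinite-slope factor of safety.
    c [kN/m^2], phi [deg], gamma [kN/m^3], depth [m] (to slip plane),
    beta [deg] (slope angle), pore_u [kN/m^2] (pore-water pressure)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * depth * math.cos(beta) ** 2 - pore_u   # effective normal stress
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return (c + normal * math.tan(phi)) / driving

# Mid-range reported strength parameters (dry) vs. weak-end values with
# assumed pore pressure (wet); slope angle and depth are assumptions.
dry = infinite_slope_fs(c=30, phi_deg=25, gamma=21, depth=20, beta_deg=20)
wet = infinite_slope_fs(c=10, phi_deg=13, gamma=21, depth=20, beta_deg=20, pore_u=100)
print(round(dry, 2), round(wet, 2))  # FS > 1 is stable, FS < 1 fails
```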
Abstract:
This article presents a methodological proposal for mapping the diversity of the audiovisual industry in the digital scenario by portraying the most important interactions between those who create, produce, distribute and disseminate audiovisual productions online, paying special attention to powerful intermediaries and to small and medium independent agents. Taking a flexible understanding of social network analysis as a point of departure, the aim is to understand the structure of the audiovisual industry on the internet: for a given sector, the agents, their relations and the networks they give rise to, as well as the structural conditions under which they operate, are studied. The aim is to answer questions such as: what is mapping, what is of interest to map, how can it be done, and what advantages and disadvantages do the results present?
Abstract:
Educational systems worldwide are facing an enormous shift as a result of sociocultural, political, economic, and technological changes. The technologies and practices that have developed over the last decade have been heralded as opportunities to transform both online and traditional education systems. While proponents of these new ideas often postulate that they have the potential to address the educational problems facing both students and institutions and that they could provide an opportunity to rethink the ways that education is organized and enacted, there is little evidence of emerging technologies and practices in use in online education. Because researchers and practitioners interested in these possibilities often reside in various disciplines and academic departments, the sharing and dissemination of their work across often rigid boundaries is a formidable task. Contributors to Emergence and Innovation in Digital Learning include individuals who are shaping the future of online learning with their innovative applications and investigations on the impact of issues such as openness, analytics, MOOCs, and social media. Building on work first published in Emerging Technologies in Distance Education, the contributors to this collection harness the dispersed knowledge in online education to provide a one-stop locale for work on emergent approaches in the field. Their conclusions will influence the adoption and success of these approaches to education and will enable researchers and practitioners to conceptualize, critique, and enhance their understanding of the foundations and applications of new technologies.
Abstract:
The development of broadband Internet connections has fostered new audiovisual media services and opened new possibilities for accessing broadcasts. The Internet retransmission case of TVCatchup before the CJEU was the first case concerning new technologies in the light of Art. 3.1 of the Information Society Directive. On the other side of the Atlantic, the Aereo case reached the U.S. Supreme Court and challenged the interpretation of public performance rights. In both cases the recipients of the services could receive broadcast programs in ways alternative to traditional broadcasting channels, including terrestrial broadcasting or cable transmission. The Aereo case raised debate on the possible impact of the interpretation of copyright law on the development of new technologies, particularly cloud-based services. It is interesting to see whether similar problems occur in the EU. The "umbrella" in the title refers to Art. 8 WCT, which covers digital and Internet transmission and constitutes the background for the EU and U.S. legal solutions. The article argues that no international standard for the qualification of the discussed services exists.
Abstract:
This keynote presentation will report some of our research work and experience on the development and application of relevant methods, models, systems and simulation techniques in support of different types and various levels of decision making for business, management and engineering. In particular, the following topics will be covered:
• Modelling, multi-agent-based simulation and analysis of the allocation management of carbon dioxide emission permits in China (Nanfeng Liu & Shuliang Li)
• Agent-based simulation of the dynamic evolution of enterprise carbon assets (Yin Zeng & Shuliang Li)
• A framework & system for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps: a big data perspective (Jin Xu, Zheng Li, Shuliang Li & Yanyan Zhang)
• Open innovation: intelligent model, social media & complex adaptive system simulation (Shuliang Li & Jim Zheng Li)
• A framework, model and software prototype for modelling and simulation of deshopping behaviour and how companies respond (Shawkat Rahman & Shuliang Li)
• Integrating multiple agents, simulation, knowledge bases and fuzzy logic for international marketing decision making (Shuliang Li & Jim Zheng Li)
• A Web-based hybrid intelligent system for combined conventional, digital, mobile, social media and mobile marketing strategy formulation (Shuliang Li & Jim Zheng Li)
• A hybrid intelligent model for Web & social media dynamics, and evolutionary and adaptive branding (Shuliang Li)
• A hybrid paradigm for modelling, simulation and analysis of brand virality in social media (Shuliang Li & Jim Zheng Li)
• Network configuration management: attack paradigms and architectures for computer network survivability (Tero Karvinen & Shuliang Li)
Abstract:
Companies have always been organized by processes, often imperceptible to their employees. With the advancement of technology, organizational processes now run through computers and thus generate immediate information that is available to each sector. With the objective of obtaining business information in real time, the government created SPED, the Public Digital Bookkeeping System, which involves three subsystems: the Electronic Invoice, Digital Accounting Bookkeeping and Digital Tax Bookkeeping. This system is revolutionizing business structures by gathering, in an innovative way, all information and interlinked business processes. Implementing SPED requires a revision of organizational processes, since the information generated is sent online to the government and must be free of errors. The study therefore aimed to analyze the changes brought about by the implementation of SPED in the main business processes. To do so, we performed a multiple case study involving three companies in the state of Pará: two operate in wholesale and one in agribusiness. Data were collected from accounting professionals, IT staff and managers. According to the results, in two companies the IT infrastructure was capable of deploying the new system without major problems, while one company had more difficulty coping with the new system. However, all companies had to examine their processes to make the customizations needed to fit. It was also observed that two of the companies have no IT governance. We therefore recommend the use of an appropriate model, not only for the implementation of SPED, but as a way to manage and extract better results from investment in information technology.
Abstract:
This article discusses the contribution of critical political economy approaches to digital journalism studies and argues that these offer important correctives to celebratory perspectives. The first part offers a review and critique of influential claims arising from self-styled new studies of convergence culture, media and creative industries. The second part discusses the contribution of critical political economy in examining digital journalism and responding to celebrant claims. The final part reflects on problems of restrictive normativity and other limitations within media political economy perspectives and considers ways in which challenges might be addressed by more synthesising approaches. The paper proposes developing radical pluralist, media systems and comparative analysis, and advocates drawing on strengths in both political economy and culturalist traditions to map and evaluate practices across all sectors of digital journalism.
Abstract:
Children with Attention-Deficit/Hyperactivity Disorder (ADHD) are at increased risk for the development of depression and delinquent behavior. Children and adolescents with ADHD also experience difficulty creating/maintaining high quality friendships and parent-child relationships, and these difficulties may contribute to the development of co-morbid internalizing and externalizing symptoms in adolescence. However, there is limited research examining whether high quality friendships and parent-child relationships mediate the relation between ADHD and the emergence of these co-morbid symptoms at the transition to high school. This study examines the mediating role of relationship quality in the association between ADHD and depressive symptoms/delinquent behaviors at this developmentally significant transition point. Results revealed significant indirect effects of grade 6 attention problems on grade 9 depressive symptoms through friendship quality and quality of the mother-child relationship in grade 8. Interventions targeting parent and peer relationships may be valuable for youth with ADHD to promote successful transitions to high school.
Abstract:
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), and magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We address these shortcomings with novel tile unwrapping and merging algorithms, together with a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms, we compared our method to existing approaches and showed that an appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available.
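The wrapping/unwrapping principle underlying this work can be illustrated in one dimension with Itoh's classic algorithm: whenever the wrapped difference between neighbouring samples is taken, any 2π jump introduced by wrapping is undone. This is a generic sketch of the principle only, not the tile-based algorithms developed in the thesis.

```python
import math

def wrap(phi):
    """Wrap a phase value into [-pi, pi)."""
    return phi - 2 * math.pi * math.floor((phi + math.pi) / (2 * math.pi))

def unwrap_1d(wrapped):
    """Itoh's 1-D phase unwrapping: accumulate the *wrapped* differences
    between neighbouring samples, which restores the 2*pi multiples lost
    by the modulo operation (valid when true steps stay below pi)."""
    out = [wrapped[0]]
    for w in wrapped[1:]:
        d = wrap(w - out[-1])    # wrapped difference lies in [-pi, pi)
        out.append(out[-1] + d)
    return out

# A linearly increasing "true" phase, observed only modulo 2*pi:
true_phase = [0.1 * k for k in range(100)]
wrapped = [wrap(p) for p in true_phase]
recovered = unwrap_1d(wrapped)
print(max(abs(r - t) for r, t in zip(recovered, true_phase)))
```

Tile-based methods apply the same idea within tiles and then resolve a single 2π offset per tile at the merging step.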
Abstract:
In the Lower Saxony Geo-Information System NIBIS, digital data from various scientific fields are stored in separate information systems ("FIS geology", "FIS pedology", etc.) so that they can be processed and interpreted; this is necessary to meet the increasing demand for soil-relevant information for decision-making and planning purposes. The necessary work is considerably accelerated, and its quality improved, by setting up and actually using such a tool. A detailed account is given of the Lower Saxony Geo-Information System NIBIS, in particular how the database is set up and how NIBIS is used in cases where concrete problems occurred.
Abstract:
Many applications, including communications, test and measurement, and radar, require the generation of signals with a high degree of spectral purity. One method for producing tunable, low-noise source signals is to combine the outputs of multiple direct digital synthesizers (DDSs) arranged in a parallel configuration. In such an approach, if all noise is uncorrelated across channels, the noise will decrease relative to the combined signal power, resulting in a reduction of sideband noise and an increase in SNR. However, in any real array, the broadband noise and spurious components will be correlated to some degree, limiting the gains achieved by parallelization. This thesis examines the potential performance benefits that may arise from using an array of DDSs, with a focus on several types of common DDS errors, including phase noise, phase truncation spurs, quantization noise spurs, and quantizer nonlinearity spurs. Measurements to determine the level of correlation among DDS channels were made on a custom 14-channel DDS testbed. The investigation of the phase noise of a DDS array indicates that the contribution to the phase noise from the digital-to-analog converters (DACs) can be decreased to a desired level by using a large enough number of channels. In such a system, the phase noise qualities of the source clock and the system cost and complexity will be the main limitations on the phase noise of the DDS array. The study of phase truncation spurs suggests that, at least in our system, the phase truncation spurs are uncorrelated, contrary to the theoretical prediction. We believe this decorrelation is due to an unidentified mechanism in our DDS array that is unaccounted for in our current operational DDS model. This mechanism, likely due to some timing element in the FPGA, causes some randomness in the relative phases of the truncation spurs from channel to channel each time the DDS array is powered up.
This randomness decorrelates the phase truncation spurs, opening the potential for SFDR gain from using a DDS array. The analysis of the correlation of quantization noise spurs in an array of DDSs shows that the total quantization noise power of each DDS channel is uncorrelated for nearly all values of DAC output bits. This suggests that a nearly N-fold gain in SQNR is possible for an N-channel array of DDSs. This gain will be most apparent for low-bit DACs, in which quantization noise is notably higher than the thermal noise contribution. Lastly, the measurements of the correlation of quantizer nonlinearity spurs demonstrate that the second and third harmonics are highly correlated across channels at all frequencies tested. This means that there is no benefit to using an array of DDSs to mitigate in-band quantizer nonlinearities; alternate methods of harmonic spur management must be employed.
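The central premise of the array — uncorrelated noise averages down while the coherent signal adds — can be checked numerically. The sketch below combines N channels carrying the same sine wave plus independent Gaussian noise and measures the SNR improvement, which should approach 10·log10(N) dB; it is a generic statistical model, not a simulation of the DDS hardware described here.

```python
import math
import random

def snr_db(signal, noisy):
    """SNR of `noisy` relative to the clean `signal`, in dB."""
    sig_p = sum(s * s for s in signal) / len(signal)
    err_p = sum((n - s) ** 2 for n, s in zip(noisy, signal)) / len(signal)
    return 10 * math.log10(sig_p / err_p)

random.seed(0)
n_samples, n_channels, sigma = 4096, 8, 0.1
clean = [math.sin(2 * math.pi * 5 * t / n_samples) for t in range(n_samples)]

# Each channel: the same coherent signal plus independent Gaussian noise.
channels = [[c + random.gauss(0, sigma) for c in clean] for _ in range(n_channels)]
combined = [sum(ch[t] for ch in channels) / n_channels for t in range(n_samples)]

single = snr_db(clean, channels[0])
array = snr_db(clean, combined)
print(round(array - single, 1), "dB gain from", n_channels, "channels")
```

Once channel noise becomes partially correlated, as in the real hardware, the error power no longer divides by N and the gain saturates below this ideal figure.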
Abstract:
Document representations can rapidly become unwieldy if they try to encapsulate all possible document properties, ranging from abstract structure to detailed rendering and layout. We present a composite document approach wherein an XML-based document representation is linked via a shadow tree of bi-directional pointers to a PDF representation of the same document. Using a two-window viewer, any material selected in the PDF can be related back to the corresponding material in the XML, and vice versa. In this way the treatment of specialist material such as mathematics, music or chemistry (e.g. via 'read aloud' or 'play aloud') can be activated via standard tools working within the XML representation, rather than requiring that application-specific structures be embedded in the PDF itself. The problems of textual recognition and tree pattern matching between the two representations are discussed in detail. Comparisons are drawn between our use of a shadow tree of pointers to map between document representations and the use of a code-replacement shadow tree in technologies such as XBL.
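The shadow tree of bi-directional pointers can be modelled, in miniature, as a pair of maps between node identifiers in the two representations. The class and identifiers below are invented for illustration; the actual system links XML tree nodes to PDF content regions with far richer positional data.

```python
class ShadowTree:
    """Toy bi-directional link table between an XML node id and a PDF
    region id. Illustrative only: a real linker would store tree paths
    plus page/coordinate information for each pointer."""

    def __init__(self):
        self._xml_to_pdf = {}
        self._pdf_to_xml = {}

    def link(self, xml_id, pdf_id):
        """Record one bi-directional pointer pair."""
        self._xml_to_pdf[xml_id] = pdf_id
        self._pdf_to_xml[pdf_id] = xml_id

    def pdf_for(self, xml_id):
        return self._xml_to_pdf.get(xml_id)

    def xml_for(self, pdf_id):
        return self._pdf_to_xml.get(pdf_id)

shadow = ShadowTree()
shadow.link("xml:/doc/sec[1]/math[2]", "pdf:page3/region7")
# Selecting the region in the PDF window resolves back to the XML node:
print(shadow.xml_for("pdf:page3/region7"))
```

Because the pointers run both ways, a selection in either window can drive highlighting, or tools such as read-aloud, in the other.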
Collection-Level Subject Access in Aggregations of Digital Collections: Metadata Application and Use
Abstract:
Problems in subject access to information organization systems have been under investigation for a long time. Focusing on item-level information discovery and access, researchers have identified a range of subject access problems, including the quality and application of metadata, as well as the complexity of user knowledge required for successful subject exploration. While aggregations of digital collections built in the United States and abroad generate collection-level metadata of various levels of granularity and richness, no research has yet focused on the role of collection-level metadata in user interaction with these aggregations. This dissertation research sought to bridge this gap by answering the question "How does collection-level metadata mediate scholarly subject access to aggregated digital collections?" This goal was achieved using three research methods:
• in-depth comparative content analysis of collection-level metadata in three large-scale aggregations of cultural heritage digital collections: Opening History, American Memory, and The European Library;
• transaction log analysis of user interactions with Opening History; and
• interview and observation data on academic historians interacting with two aggregations: Opening History and American Memory.
It was found that subject-based resource discovery is significantly influenced by collection-level metadata richness. This richness includes: 1) description of a collection's subject matter with mutually complementary values in different metadata fields; and 2) a variety of collection properties/characteristics encoded in the free-text Description field — types and genres of objects in a digital collection, as well as topical, geographic and temporal coverage, are the most consistently represented collection characteristics in free-text Description fields. Analysis of user interactions with aggregations of digital collections yielded a number of interesting findings.
Item-level user interactions were found to occur more often than collection-level interactions. Collection browse is initiated more often than search, with subject browse (topical and geographic) used most often. The majority of collection search queries fall within FRBR Group 3 categories: object, concept, and place. Significantly more object, concept, and corporate body searches, and fewer individual person, event and class-of-persons searches, were observed in collection searches than in item searches. While collection search is most often satisfied by the Description and/or Subjects collection metadata fields, it would fail to retrieve a significant proportion of collection records without controlled-vocabulary subject metadata (Temporal Coverage, Geographic Coverage, Subjects, and Objects) and free-text metadata (the Description field). Observation data show that collection metadata records in the Opening History and American Memory aggregations are often viewed. Transaction log data show a high level of engagement with collection metadata records in Opening History, with total page views for collections more than 4 times greater than item page views. Scholars observed viewing collection records valued descriptive information on provenance, collection size, types of objects, subjects, geographic coverage, and temporal coverage. They also considered the structured display of collection metadata in Opening History more useful than the alternative approach taken by other aggregations, such as American Memory, which displays only the free-text Description field to the end user. The results extend the understanding of the value of collection-level subject metadata, particularly free-text metadata, for scholarly users of aggregations of digital collections. The analysis of the collection metadata created by three large-scale aggregations provides a better understanding of collection-level metadata application patterns and suggests best practices.
This dissertation is also the first empirical research contribution to test the FRBR model as a conceptual and analytic framework for studying collection-level subject access.
Abstract:
This thesis deals with tensor completion for the solution of multidimensional inverse problems. We study the problem of reconstructing an approximately low rank tensor from a small number of noisy linear measurements. New recovery guarantees, numerical algorithms, non-uniform sampling strategies, and parameter selection algorithms are developed. We derive a fixed point continuation algorithm for tensor completion and prove its convergence. A restricted isometry property (RIP) based tensor recovery guarantee is proved. Probabilistic recovery guarantees are obtained for sub-Gaussian measurement operators and for measurements obtained by non-uniform sampling from a Parseval tight frame. We show how tensor completion can be used to solve multidimensional inverse problems arising in NMR relaxometry. Algorithms are developed for regularization parameter selection, including accelerated k-fold cross-validation and generalized cross-validation. These methods are validated on experimental and simulated data. We also derive condition number estimates for nonnegative least squares problems. Tensor recovery promises to significantly accelerate N-dimensional NMR relaxometry and related experiments, enabling previously impractical experiments. Our methods could also be applied to other inverse problems arising in machine learning, image processing, signal processing, computer vision, and other fields.