933 results for proximity query, collision test, distance test, data compression, triangle test
Abstract:
The structure and infrastructure of the Mexican technical literature were determined. A representative database of technical articles was extracted from the Science Citation Index for the year 2002, with each article containing at least one author with a Mexican address. Many different manual and statistical clustering methods were used to identify the structure of the technical literature (especially the science and technology core competencies). One of the pervasive technical topics identified from the clustering, thin films research, was analyzed further using bibliometrics, in order to identify the infrastructure of this technology. Published by Elsevier Inc.
Abstract:
360-degree feedback from a variety of rater sources yields important information about leaders' styles, strengths and weaknesses for development. Results where observer ratings are discrepant (i.e., different) from self-ratings are often seen as indicators of problematic leadership relationships, skills, or lack of self-awareness. Yet research into the antecedents of such self-observer rating discrepancy suggests the presence of systematic influences, such as cultural values. The present study investigates how rating discrepancies on three leadership skills (decision making, leading employees, and composure) vary with one exemplary culture dimension (power distance), using data from 31 countries and multilevel structural equation modelling. Results show that cultural values indeed predict self-observer rating discrepancies. Thus, systemic and contextual influences such as culture need to be taken into consideration when interpreting the importance and meaning of self-observer rating discrepancies in 360-degree instruments.
Abstract:
Purpose - The objective of this paper is to uncover the underlying dimensions of, and examine the similarities and differences in, personal uses of advertising, perceived socio-economic effects of advertising, and consumer beliefs and attitudes toward advertising in Bulgaria and Romania. Moreover, it aims to identify the relative importance of the predictors of attitudes toward advertising in the two countries. Design/methodology/approach - The paper draws upon findings of previous research and theoretical developments by Bauer and Greyser, Sandage and Leckenby, and Pollay and Mittal. The study uses a stratified random sample of 947 face-to-face interviews with adult respondents from major urban areas in Bulgaria (507) and Romania (440). Variables are measured on multi-item scales as a typical application of the reflective indicator model. Findings - Results show that there are significant differences between Romanian and Bulgarian respondents in terms of their attitudes toward advertising. Romanians are more positive about advertising as an institution than the instruments of advertising. Romanians seem to accept the role of advertising in a free market economy, but have less confidence in advertising claims and techniques. Bulgarian respondents seem more sceptical toward advertising in general and are less enthusiastic about embracing the role of advertising as an institution. Moreover, Bulgarians are highly negative towards the instruments advertising uses to convey its messages to consumers. Research limitations/implications - The research findings reflect the views of urban dwellers and may not be generalisable to the wider population of the two countries. Interviewer bias was reduced by eliminating verbal or non-verbal cues to the respondents, and by the use of stratified random sampling. Practical implications - The paper suggests that the regulatory role of codes of advertising practice and industry regulating bodies should be enhanced, and their ability to protect consumers enforced. Marketing campaigns should be more inclusive to involve diverse social groups and reflect generally-accepted social norms. Originality/value - This study reveals that, while general attitudes toward advertising may be similar, attitudes toward the institution and instruments of advertising may differ even in countries with geographic proximity and low cultural distance. © Emerald Group Publishing Limited.
Abstract:
Distributed source coding (DSC) has recently been considered an efficient approach to data compression in wireless sensor networks (WSN). With this coding method, multiple sensor nodes compress their correlated observations without inter-node communication, so energy and bandwidth can be saved. In this paper, we investigate a random-binning-based DSC scheme for remote source estimation in WSN and its performance in terms of estimated signal-to-distortion ratio (SDR). With the introduction of a detailed power consumption model for wireless sensor communications, we quantitatively analyze the overall network energy consumption of the DSC scheme. We further propose a novel energy-aware transmission protocol for the DSC scheme, which flexibly optimizes DSC performance in terms of either SDR or energy consumption by adapting the source coding and transmission parameters to the network conditions. Simulations validate the energy efficiency of the proposed adaptive transmission protocol. © 2007 IEEE.
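The abstract does not spell out the paper's random-binning construction, so the following is only a toy sketch of the underlying idea (Slepian-Wolf-style binning): the sensor transmits just the bin index of its reading (a modulo "syndrome"), and the decoder recovers the reading by picking, within that bin, the value closest to a correlated reading it already has. Function names and parameters are illustrative, not the paper's.

    import numpy as np

    def dsc_encode(x: int, num_bins: int) -> int:
        """Send only the bin index of the reading (fewer bits than the raw value)."""
        return x % num_bins

    def dsc_decode(bin_index: int, side_info: int, num_bins: int) -> int:
        """Recover the reading: among values congruent to bin_index (mod num_bins),
        pick the one closest to the correlated side information."""
        base = side_info - (side_info % num_bins) + bin_index
        candidates = [base - num_bins, base, base + num_bins]
        return min(candidates, key=lambda c: abs(c - side_info))

    rng = np.random.default_rng(0)
    x = int(rng.integers(0, 256))        # reading at the encoding sensor
    y = x + int(rng.integers(-3, 4))     # correlated reading available at the decoder
    num_bins = 16                        # 4 bits transmitted instead of 8

    recovered = dsc_decode(dsc_encode(x, num_bins), y, num_bins)
    print(x, recovered)                  # equal whenever |x - y| < num_bins / 2

In a real deployment the bin count would be chosen from the assumed inter-node correlation, which is exactly the kind of parameter the paper's adaptive protocol tunes to network conditions.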
Abstract:
AMS Subj. Classification: H.3.7 Digital Libraries, K.6.5 Security and Protection
Abstract:
Fibre-to-the-premises (FTTP) has long been sought as the ultimate solution to satisfy the demand for broadband access in the foreseeable future and to offer distance-independent data rates within the reach of the access network. However, currently deployed FTTP networks have in most cases only replaced the transmission medium, without improving the overall architecture, resulting in deployments that are only cost-efficient in densely populated areas (effectively increasing the digital divide). In addition, the large potential increase in access capacity cannot be matched by a similar increase in core capacity at competitive cost, effectively moving the bottleneck from access to core. DISCUS is a European Integrated Project that, building on optical-centric solutions such as Long-Reach Passive Optical access and a flat optical core, aims to deliver a cost-effective architecture for ubiquitous broadband services. One of the key features of the project is its end-to-end approach, which promises to deliver a complete network design and a conclusive analysis of its economic viability. © 2013 IEEE.
Abstract:
The large volume of data generated by automation and process supervision in industry creates two problems: a heavy demand for disk storage and the difficulty of streaming the data over a telecommunications link. Lossy data compression algorithms emerged in the 1990s to address these problems and, as a consequence, industry began using them in supervision systems to compress data in real time. These algorithms were designed to eliminate redundant and unwanted information in an efficient and simple way. However, their parameters must be tuned for each process variable, which becomes impractical in systems that monitor thousands of variables. In this context, this paper proposes the Adaptive Swinging Door Trending algorithm, an adaptation of Swinging Door Trending in which the main parameters are adjusted dynamically by analyzing the signal trend in real time. A comparative performance analysis of lossy data compression algorithms applied to time-series process variables and dynamometer cards is also presented; the algorithms used for comparison were piecewise-linear methods and transform-based methods.
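For context, here is a minimal sketch of the classic, non-adaptive Swinging Door Trending pass that the proposed algorithm adapts; the interface is illustrative, and the paper's dynamic parameter adjustment is not reproduced.

    def swinging_door_compress(times, values, deviation):
        """Basic Swinging Door Trending: keep only the samples needed so that
        linear interpolation of the kept samples stays within +/- deviation of
        every discarded sample. Assumes strictly increasing timestamps.
        Returns the indices of the retained samples."""
        kept = [0]
        pivot_t, pivot_v = times[0], values[0]
        last_t, last_v = pivot_t, pivot_v
        slope_up, slope_lo = float("inf"), float("-inf")

        for i in range(1, len(values)):
            t, v = times[i], values[i]
            dt = t - pivot_t
            slope_up = min(slope_up, (v + deviation - pivot_v) / dt)
            slope_lo = max(slope_lo, (v - deviation - pivot_v) / dt)
            if slope_lo > slope_up:        # the two "doors" have crossed
                kept.append(i - 1)         # archive the previous sample as new pivot
                pivot_t, pivot_v = last_t, last_v
                dt = t - pivot_t
                slope_up = (v + deviation - pivot_v) / dt
                slope_lo = (v - deviation - pivot_v) / dt
            last_t, last_v = t, v

        if kept[-1] != len(values) - 1:
            kept.append(len(values) - 1)   # always keep the final sample
        return kept

As I read the abstract, the adaptive variant adjusts the deviation threshold on-line from the observed signal trend instead of keeping it fixed per variable; that adaptation is what this sketch omits.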
Abstract:
Smartphones have undergone a remarkable evolution over the last few years, from simple calling devices to full-fledged computing devices where multiple services and applications run concurrently. Unfortunately, battery capacity has increased at a much slower pace and has become the main bottleneck for Internet-connected smartphones. Several software-based techniques have been proposed in the literature for improving battery life. The most common techniques include data compression, packet aggregation or batch scheduling, offloading partial computations to the cloud, and periodically switching off interfaces (e.g., WiFi or 3G/4G) for short intervals. However, there has been no focus on eliminating the energy waste of background applications that extensively utilize smartphone resources such as the CPU, memory, GPS, and the WiFi or 3G/4G data connection. In this paper, we propose an Application State Proxy (ASP) that suppresses/stops applications on the smartphone and maintains their presence on another network device. The applications are resumed/restarted on the smartphone only in case of an event, such as a new message arrival. We present the key requirements for the ASP service and different possible architectural designs. In short, the ASP concept can significantly improve smartphone battery life by minimizing the resources consumed by background applications.
Abstract:
Microsecond-long Molecular Dynamics (MD) trajectories of biomolecular processes are now possible due to advances in computer technology. Soon, trajectories long enough to probe dynamics over many milliseconds will become available. Since these timescales match the physiological timescales over which many small proteins fold, all-atom MD simulations of protein folding are now becoming popular. To distill features of such large folding trajectories, we must develop methods that can both compress trajectory data to enable visualization and lend themselves to further analysis, such as the finding of collective coordinates and reduction of the dynamics. Conventionally, clustering has been the most popular MD trajectory analysis technique, followed by principal component analysis (PCA). Simple clustering used in MD trajectory analysis suffers from various serious drawbacks, namely, (i) it is not data driven, (ii) it is unstable to noise and changes in cutoff parameters, and (iii) since it does not take into account interrelationships amongst data points, the separation of data into clusters can often be artificial. Usually, partitions generated by clustering techniques are validated visually, but such validation is not possible for MD trajectories of protein folding, as the underlying structural transitions are not well understood. Rigorous cluster validation techniques may be adapted, but it is more crucial to reduce the dimensions in which MD trajectories reside, while still preserving their salient features. PCA has often been used for dimension reduction and, while it is computationally inexpensive, being a linear method it does not achieve good data compression. In this thesis, I propose a different method, a nonmetric multidimensional scaling (nMDS) technique, which achieves superior data compression by virtue of being nonlinear, and also provides a clear insight into the structural processes underlying MD trajectories. I illustrate the capabilities of nMDS by analyzing three complete villin headpiece folding trajectories and six norleucine mutant (NLE) folding trajectories simulated by Freddolino and Schulten [1]. Using these trajectories, I make comparisons between nMDS, PCA and clustering to demonstrate the superiority of nMDS. The three villin headpiece trajectories showed great structural heterogeneity. Apart from a few trivial features like early formation of secondary structure, no commonalities between trajectories were found. There were no units of residues or atoms found moving in concert across the trajectories. A flipping transition, corresponding to the flipping of helix 1 relative to the plane formed by helices 2 and 3, was observed towards the end of the folding process in all trajectories, when nearly all native contacts had been formed. However, the transition occurred through a different series of steps in each trajectory, indicating that it may not be a common transition in villin folding. All trajectories showed competition between local structure formation/hydrophobic collapse and global structure formation. Our analysis of the NLE trajectories confirms the notion that a tight hydrophobic core inhibits correct 3-D rearrangement. Only one of the six NLE trajectories folded, and it showed no flipping transition. All the other trajectories became trapped in hydrophobically collapsed states.
The NLE residues were found to be buried more deeply in the core than the corresponding lysines in the villin headpiece, thereby making the core tighter and harder to undo for 3-D rearrangement. Our results suggest that the NLE mutant may not be the fast folder that experiments suggest. The tightness of the hydrophobic core may be a very important factor in the folding of larger proteins. It is likely that chaperones like GroEL act to undo the tight hydrophobic core of proteins, after most secondary structure elements have been formed, so that global rearrangement is easier. I conclude by presenting facts about chaperone-protein complexes and propose further directions for the study of protein folding.
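The thesis's exact pipeline is not given in this abstract; as a rough illustration, one way to embed trajectory frames with an off-the-shelf nonmetric MDS is to feed it a precomputed matrix of pairwise RMSDs (scikit-learn assumed available; toy data, no structural superposition):

    import numpy as np
    from sklearn.manifold import MDS

    # Toy stand-in for an MD trajectory: n_frames x n_atoms x 3 coordinates.
    rng = np.random.default_rng(1)
    traj = rng.normal(size=(200, 35, 3))

    def rmsd(a, b):
        """Plain RMSD between two frames (no superposition, for brevity)."""
        return np.sqrt(((a - b) ** 2).sum(axis=1).mean())

    n = len(traj)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = rmsd(traj[i], traj[j])

    # Nonmetric MDS preserves only the rank order of the dissimilarities,
    # which is what makes the embedding nonlinear.
    embedding = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                    random_state=0).fit_transform(dist)
    print(embedding.shape)  # (200, 2): a low-dimensional projection of the trajectory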
Abstract:
In this article we describe a semantic localization dataset for indoor environments named ViDRILO. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames in the dataset are annotated not only with the semantic category of the scene but also with the presence or absence of a list of predefined objects appearing in the scene. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames, in conjunction with the annotation scheme, makes this dataset different from existing ones. The ViDRILO dataset is released for use as a benchmark for problems such as multimodal place classification and object recognition, 3D reconstruction, and point cloud data compression.
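The abstract does not describe the distributed tools' API; purely as an illustration, the information carried by one annotated frame could be modelled as below (field names and layouts are hypothetical, not ViDRILO's actual format):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Frame:
        """Illustrative container for one ViDRILO-style frame."""
        point_cloud: np.ndarray           # (N, 3) XYZ points (layout illustrative)
        image: np.ndarray                 # (H, W, 3) RGB perspective image
        scene_category: str               # semantic label of the scene, e.g. "corridor"
        object_presence: dict[str, bool]  # predefined object -> present in the scene?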
Abstract:
Sequence problems are among today's most challenging interdisciplinary topics. They are ubiquitous in science and daily life and occur, for example, in the form of DNA sequences encoding all the information of an organism, as text (natural or formal), or in the form of a computer program. Sequence problems therefore occur in many variations in computational biology (drug development), coding theory, data compression, and quantitative and computational linguistics (e.g. machine translation). In recent years, several proposals have appeared for formulating sequence problems such as the closest string problem (CSP) and the farthest string problem (FSP) as integer linear programming problems (ILPP). In the present talk we present a novel general approach to reduce the size of the ILPP by grouping isomorphous columns of the string matrix together. The approach is of practical use, since the solution of sequence problems is very time consuming, in particular when the sequences are long.
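The talk's formulation is not given here; one natural reading of "grouping isomorphous columns" for the CSP is to replace per-position binary variables by integer counts over distinct column types. A sketch in that grouped form (notation is mine): group equal columns of the string matrix of $k$ strings of length $L$ into types $c \in C$ with multiplicities $m_c$, write $c_i$ for the character of string $s_i$ in a column of type $c$, and let $y_{c,a}$ count how many columns of type $c$ receive character $a \in \Sigma$ in the consensus string. Then

    \begin{align*}
    \min\;& d\\
    \text{s.t. }& \sum_{a \in \Sigma} y_{c,a} = m_c && \forall c \in C,\\
    & \sum_{c \in C} \bigl(m_c - y_{c,\,c_i}\bigr) \le d && i = 1,\dots,k,\\
    & y_{c,a} \in \mathbb{Z}_{\ge 0},\; d \in \mathbb{Z}_{\ge 0}.
    \end{align*}

The ungrouped formulation needs $L\,|\Sigma|$ binary variables; after grouping, only $|C|\,|\Sigma|$ integer variables remain, with $|C| \le \min(L, |\Sigma|^k)$.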
Abstract:
In recent years a great deal of effort has been put into the development of new techniques for automatic object classification, partly because of its consequences for applications such as medical imaging and driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition named the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train, unlike other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition to find basis vectors used to classify a new object. The second model is a tensor dictionary learning model, based on the TT decomposition, where the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
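The thesis's classification models are not reproduced here; as background, this is a minimal sketch of the TT-SVD procedure that compresses a d-way tensor into a "train" of 3-way cores by sequential truncated SVDs (pure NumPy, names illustrative):

    import numpy as np

    def tt_svd(tensor, max_rank):
        """Factor a d-way tensor into TT cores G_1, ..., G_d, each of shape
        (r_{k-1}, n_k, r_k), truncating every SVD at rank <= max_rank."""
        dims = tensor.shape
        cores, r_prev = [], 1
        unfolding = tensor.reshape(r_prev * dims[0], -1)
        for k in range(len(dims) - 1):
            U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
            r = min(max_rank, len(s))
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            unfolding = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
            r_prev = r
        cores.append(unfolding.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        """Contract the cores back into a full tensor (to check the error)."""
        full = cores[0]
        for core in cores[1:]:
            full = np.tensordot(full, core, axes=([-1], [0]))
        return full.squeeze(axis=(0, -1))

    x = np.random.default_rng(2).normal(size=(8, 8, 8, 8))
    cores = tt_svd(x, max_rank=4)
    err = np.linalg.norm(tt_reconstruct(cores) - x) / np.linalg.norm(x)
    print(err)   # relative error introduced by the rank-4 truncation

Lowering max_rank trades accuracy for memory: the cores hold far fewer entries than the full tensor, which is the data-compression effect the thesis exploits.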
Abstract:
National Highway Traffic Safety Administration, Accident Investigation Division, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Accident Investigation Division, Washington, D.C.