997 results for Interfaccia, Javascript, jQuery, Ajax, JSP, Servlet
Abstract:
The NGO Intermon Oxfam organizes an annual charity race called "Trailwalker", in which teams must complete a 100 km course in under 32 hours. A solution for tracking the teams during the race has been designed and implemented: an application for Android devices obtains each team's position via GPS or Google's location service and sends the data to a central server running ArcGIS GIS software, which is then used to display the data in an HTML viewer built on the ArcGIS JavaScript API.
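As a rough illustration of the client-to-server reporting step described above, the following minimal Python sketch posts a position fix as JSON to a hypothetical ingestion endpoint; the actual URL, payload schema, and transport used by the Android application are not specified in the abstract.

```python
import json
import time
import urllib.request

# Hypothetical ingestion endpoint on the central GIS server; the real service
# name and payload schema are assumptions, not details from the abstract.
TRACKING_ENDPOINT = "https://example.org/trailwalker/positions"

def send_position_fix(team_id: str, lat: float, lon: float) -> int:
    """Post a single position fix as JSON and return the HTTP status code."""
    payload = {
        "team": team_id,
        "lat": lat,
        "lon": lon,
        "timestamp": int(time.time()),  # Unix epoch seconds
    }
    request = urllib.request.Request(
        TRACKING_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    # Example: report team 42 at a point on the 100 km course
    # (requires the hypothetical endpoint to exist).
    print(send_position_fix("team-42", 41.3874, 2.1686))
```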
Abstract:
The project consists of developing a cross-platform application capable of running on a wide range of operating systems from different mobile device manufacturers, covering both smartphones and tablets. The application is based on a calendar connected to the UOC campus.
Abstract:
This article originates from a panel with the above title, held at IEEE VTC Spring 2009, in which the authors took part. The enthusiastic response it received prompted us to discuss for a wider audience whether research at the physical layer (PHY) is still relevant to the field of wireless communications. Using cellular systems as the axis of our exposition, we exemplify areas where PHY research has indeed hit a performance wall and where any improvements are expected to be marginal. We then discuss whether the research directions taken in the past have always been the right choice and how lessons learned could influence future policy decisions. Several of the raised issues are subsequently discussed in greater detail, e.g., the growing divergence between academia and industry. With this argumentation at hand, we identify areas that are either under-developed or likely to be of impact in coming years - hence corroborating the relevance and importance of PHY research.
Abstract:
The optimization of the pilot overhead in single-user wireless fading channels is investigated, and the dependence of this overhead on various system parameters of interest (e.g., fading rate, signal-to-noise ratio) is quantified. The achievable pilot-based spectral efficiency is expanded with respect to the fading rate about the no-fading point, which leads to an accurate order expansion for the pilot overhead. This expansion identifies that the pilot overhead, as well as the spectral efficiency penalty with respect to a reference system with genie-aided CSI (channel state information) at the receiver, depend on the square root of the normalized Doppler frequency. It is also shown that the widely-used block fading model is a special case of more accurate continuous fading models in terms of the achievable pilot-based spectral efficiency. Furthermore, it is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.
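As a small numerical illustration of the square-root scaling reported above, the sketch below evaluates an overhead proportional to sqrt(N_t * f_D); the proportionality constant is purely illustrative and not a value from the paper.

```python
import math

def pilot_overhead(normalized_doppler: float, num_tx_antennas: int = 1,
                   scale: float = 1.0) -> float:
    """Leading-order overhead ~ scale * sqrt(N_t * f_D), following the
    square-root dependence described in the abstract; `scale` is an
    illustrative constant, not a quantity taken from the paper."""
    return scale * math.sqrt(num_tx_antennas * normalized_doppler)

# Overhead grows with the normalized Doppler frequency f_D, and a 4-antenna
# transmitter behaves like a single-antenna system with 4 * f_D.
for fd in (1e-4, 1e-3, 1e-2):
    print(f"f_D={fd:g}  SISO~{pilot_overhead(fd):.4f}  "
          f"4 Tx antennas~{pilot_overhead(fd, num_tx_antennas=4):.4f}")
```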
Abstract:
Demosaicking is a particular case of interpolation problems where, from a scalar image in which each pixel has either the red, the green or the blue component, we want to interpolate the full-color image. State-of-the-art demosaicking algorithms perform interpolation along edges, but these edges are estimated locally. We propose a level-set-based geometric method to estimate image edges, inspired by the image in-painting literature. This method has a time complexity of O(S), where S is the number of pixels in the image, and compares favorably with the state-of-the-art algorithms both visually and in most relevant image quality measures.
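For readers unfamiliar with edge-directed interpolation, the sketch below shows a classic local baseline that picks the interpolation direction from horizontal versus vertical gradients; it is not the level-set method proposed in the paper, only a point of reference for what "interpolation along edges" means. Applied to every interior pixel, it also runs in O(S) time.

```python
import numpy as np

def interpolate_green(mosaic: np.ndarray, r: int, c: int) -> float:
    """Estimate the missing green value at interior pixel (r, c) of a
    float-valued Bayer mosaic by averaging along the direction of the
    smaller gradient (a simple edge-directed baseline)."""
    horiz = abs(mosaic[r, c - 1] - mosaic[r, c + 1])
    vert = abs(mosaic[r - 1, c] - mosaic[r + 1, c])
    if horiz < vert:   # intensity varies little along the row: interpolate horizontally
        return (mosaic[r, c - 1] + mosaic[r, c + 1]) / 2.0
    if vert < horiz:   # intensity varies little along the column: interpolate vertically
        return (mosaic[r - 1, c] + mosaic[r + 1, c]) / 2.0
    return (mosaic[r, c - 1] + mosaic[r, c + 1] +
            mosaic[r - 1, c] + mosaic[r + 1, c]) / 4.0

if __name__ == "__main__":
    # A vertical edge: dark columns on the left, bright columns on the right.
    demo = np.array([[10, 10, 90, 90]] * 4, dtype=float)
    # The edge is preserved (result 90.0) instead of blurring 10 and 90.
    print(interpolate_green(demo, 1, 2))
```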
Abstract:
Tone Mapping is the problem of compressing the range of a High-Dynamic-Range image so that it can be displayed on a Low-Dynamic-Range screen without losing or introducing novel details: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
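As a concrete example of what a global visual-adaptation stage can look like, the sketch below applies a Naka-Rushton-style cone-response curve to an HDR luminance map; this is a common stand-in for such a stage and not the operator proposed in the paper.

```python
import numpy as np

def global_adaptation(luminance: np.ndarray, semi_saturation: float = None) -> np.ndarray:
    """Naka-Rushton-style compression R = L / (L + sigma), a standard model of
    cone response often used as the global stage of a tone mapper; shown here
    only as an illustrative stand-in for the paper's operator."""
    if semi_saturation is None:
        # The geometric mean of the luminance is a common choice of adaptation level.
        semi_saturation = float(np.exp(np.mean(np.log(luminance + 1e-6))))
    return luminance / (luminance + semi_saturation)

if __name__ == "__main__":
    # Compress a synthetic HDR luminance map spanning five orders of magnitude.
    hdr = np.logspace(-2, 3, num=8)
    print(np.round(global_adaptation(hdr), 3))
```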
Abstract:
Intuitively, music has both predictable and unpredictable components. In this work we assess this qualitative statement in a quantitative way using common time series models fitted to state-of-the-art music descriptors. These descriptors cover different musical facets and are extracted from a large collection of real audio recordings comprising a variety of musical genres. Our findings show that music descriptor time series exhibit a certain predictability not only for short time intervals, but also for mid-term and relatively long intervals. This fact is observed independently of the descriptor, musical facet and time series model we consider. Moreover, we show that our findings are not only of theoretical relevance but can also have practical impact. To this end we demonstrate that music predictability at relatively long time intervals can be exploited in a real-world application, namely the automatic identification of cover songs (i.e. different renditions or versions of the same musical piece). Importantly, this prediction strategy yields a parameter-free approach for cover song identification that is substantially faster, allows for reduced computational storage and still maintains highly competitive accuracies when compared to state-of-the-art systems.
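One simple instance of the "common time series models" mentioned above is an autoregressive (AR) predictor; the sketch below fits one by least squares to a synthetic descriptor track and makes a one-step-ahead prediction. The model order and data are illustrative assumptions, not choices taken from the paper.

```python
import numpy as np

def fit_ar(series: np.ndarray, order: int = 4) -> np.ndarray:
    """Least-squares fit of an AR(order) model to a descriptor time series."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(series: np.ndarray, coeffs: np.ndarray) -> float:
    """One-step-ahead prediction from the last `len(coeffs)` samples."""
    return float(np.dot(series[-len(coeffs):], coeffs))

if __name__ == "__main__":
    # A partly predictable synthetic descriptor: slow oscillation plus noise.
    rng = np.random.default_rng(0)
    t = np.arange(500)
    descriptor = np.sin(0.05 * t) + 0.1 * rng.standard_normal(500)
    ar = fit_ar(descriptor)
    print("predicted:", predict_next(descriptor, ar), "last observed:", descriptor[-1])
```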
Abstract:
Winter maintenance, particularly snow removal and the stress of snow removal materials on public structures, is an enormous budgetary burden on municipalities and nongovernmental maintenance organizations in cold climates. Lately, geospatial technologies such as remote sensing, geographic information systems (GIS), and decision support tools are providing valuable tools for planning snow removal operations. A few researchers recently used geospatial technologies to develop winter maintenance tools. However, most of these winter maintenance tools, while having the potential to address some of these information needs, are not typically placed in the hands of planners and other interested stakeholders. Most tools are not constructed with a nontechnical user in mind and lack an easy-to-use, easily understood interface. A major goal of this project was to implement a web-based Winter Maintenance Decision Support System (WMDSS) that enhances the capacity of stakeholders (city/county planners, resource managers, transportation personnel, citizens, and policy makers) to evaluate different procedures for managing snow removal assets optimally. This was accomplished by integrating geospatial analytical techniques (GIS and remote sensing), the existing snow removal asset management system, and web-based spatial decision support systems. The web-based system was implemented using the ESRI ArcIMS ActiveX Connector and related web technologies, such as Active Server Pages, JavaScript, HTML, and XML. Expert knowledge on snow removal procedures is gathered and integrated into the system in the form of encoded business rules using Visual Rule Studio. The system developed not only manages the resources but also provides expert advice to assist complex decision making, such as routing, optimal resource allocation, and monitoring live weather information. This system was developed in collaboration with Black Hawk County, IA, the city of Columbia, MO, and the Iowa Department of Transportation. The product was also demonstrated to these agencies to improve the usability and applicability of the system.
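The rule-based advice component is built with Visual Rule Studio, whose syntax is not reproduced here; the sketch below only illustrates, in a hypothetical and much simplified form, how winter-maintenance expert knowledge can be encoded as data-driven rules and evaluated in order.

```python
# Hypothetical, simplified encoding of snow-removal expert knowledge as rules.
# Thresholds and recommendations are invented for illustration; they are not
# the business rules used in the WMDSS.
RULES = [
    {"if": lambda obs: obs["snow_cm"] >= 5 and obs["temp_c"] <= -10,
     "then": "pre-wet salt and dispatch the full plow fleet"},
    {"if": lambda obs: obs["snow_cm"] >= 5,
     "then": "dispatch plows on priority routes"},
    {"if": lambda obs: obs["snow_cm"] > 0,
     "then": "monitor the forecast; hold crews on standby"},
]

def recommend(observation: dict) -> str:
    """Return the first matching recommendation (rules are ordered by severity)."""
    for rule in RULES:
        if rule["if"](observation):
            return rule["then"]
    return "no action required"

if __name__ == "__main__":
    print(recommend({"snow_cm": 7, "temp_c": -12}))
```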
Abstract:
This paper describes a Computer-Supported Collaborative Learning (CSCL) case study in engineering education carried out within the context of a network management course. The case study shows that the use of two computing tools developed by the authors and based on Free- and Open-Source Software (FOSS) provides significant educational benefits over traditional engineering pedagogical approaches in terms of both concepts and engineering competencies acquisition. First, the Collage authoring tool guides and supports the course teacher in the process of authoring computer-interpretable representations (using the IMS Learning Design standard notation) of effective collaborative pedagogical designs. In addition, the Gridcole system supports the enactment of that design by guiding the students throughout the prescribed sequence of learning activities. The paper introduces the goals and context of the case study, elaborates on how Collage and Gridcole were employed, describes the evaluation methodology applied, and discusses the most significant findings derived from the case study.
Abstract:
The alignment between competences, teaching-learning methodologies and assessment is a key element of the European Higher Education Area. This paper presents the efforts carried out by six Telematics, Computer Science and Electronic Engineering Education teachers towards achieving this alignment in their subjects. In joint work with pedagogues, a set of recommended actions was identified. A selection of these actions was applied and evaluated in the six subjects. The cross-analysis of the results indicates that the actions allow students to better understand the methodologies and assessment planned for the subjects, facilitate (self-)regulation and increase students' involvement in the subjects.
Abstract:
This paper presents a technique to estimate and model patient-specific pulsatility of cerebral aneurysms over one cardiac cycle, using 3D rotational X-ray angiography (3DRA) acquisitions. Aneurysm pulsation is modeled as a time-varying spline tensor field representing the deformation applied to a reference volume image, thus producing the instantaneous morphology at each time point in the cardiac cycle. The estimated deformation is obtained by matching multiple simulated projections of the deforming volume to their corresponding original projections. A weighting scheme is introduced to account for the relevance of each original projection for the selected time point. The wide coverage of the projections, together with the weighting scheme, ensures motion consistency in all directions. The technique has been tested on digital and physical phantoms that are realistic and clinically relevant in terms of geometry, pulsation and imaging conditions. Results from digital phantom experiments demonstrate that the proposed technique is able to recover subvoxel pulsation with an error lower than 10% of the maximum pulsation in most cases. The experiments with the physical phantom demonstrated the feasibility of pulsation estimation and allowed different pulsation regions to be identified under clinical conditions.
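At its core, the estimation step described above minimizes a weighted sum of squared differences between simulated and original projections. The sketch below writes that objective in generic form; the deformation model, projection geometry and weights are caller-supplied stand-ins, since the abstract does not specify them, and the toy example uses an intensity scaling rather than a real spline deformation.

```python
import numpy as np

def weighted_matching_cost(params, reference, originals, projectors, weights, deform):
    """Weighted sum of squared differences between simulated and original
    projections. `deform` and each projector are caller-supplied stand-ins
    for the spline-based deformation and the 3DRA projection geometry."""
    deformed = deform(reference, params)
    cost = 0.0
    for project, original, weight in zip(projectors, originals, weights):
        simulated = project(deformed)
        cost += weight * float(np.sum((simulated - original) ** 2))
    return cost

if __name__ == "__main__":
    # Toy example: a 2D "volume", an intensity scaling as the stand-in
    # deformation, and row/column sums as two orthogonal projections.
    volume = np.zeros((32, 32))
    volume[12:20, 12:20] = 1.0
    deform = lambda v, s: s * v
    projectors = [lambda v: v.sum(axis=0), lambda v: v.sum(axis=1)]
    originals = [p(1.1 * volume) for p in projectors]  # "observed" projections
    # Cost is zero when the deformation parameter matches the observations.
    print(weighted_matching_cost(1.1, volume, originals, projectors, [0.5, 0.5], deform))
```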
Abstract:
The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data distributed across multiple sites in a secure way, in addition to providing on-demand computing resources for performing computationally intensive simulations for treatment planning and research.
Abstract:
A new multimodal biometric database designed and acquired within the framework of the European BioSecure Network of Excellence is presented. It comprises more than 600 individuals acquired simultaneously in three scenarios: 1) over the Internet, 2) in an office environment with a desktop PC, and 3) in indoor/outdoor environments with mobile portable hardware. The three scenarios include a common part of audio/video data. Also, signature and fingerprint data have been acquired both with desktop PC and mobile portable hardware. Additionally, hand and iris data were acquired in the second scenario using a desktop PC. Acquisition has been conducted by 11 European institutions. Additional features of the BioSecure Multimodal Database (BMDB) are: two acquisition sessions, several sensors in certain modalities, balanced gender and age distributions, multimodal realistic scenarios with simple and quick tasks per modality, cross-European diversity, availability of demographic data, and compatibility with other multimodal databases. The novel acquisition conditions of the BMDB allow us to perform new challenging research and evaluation of either monomodal or multimodal biometric systems, as in the recent BioSecure Multimodal Evaluation campaign. A description of this campaign including baseline results of individual modalities from the new database is also given. The database is expected to be available for research purposes through the BioSecure Association during 2008.
Abstract:
In a distributed key distribution scheme, a set of servers helps a set of users in a group to securely obtain a common key. Security means that an adversary who corrupts some servers and some users has no information about the key of a noncorrupted group. In this work, we formalize the security analysis of one such scheme, an analysis that was not considered in the original proposal. We prove the scheme is secure in the random oracle model, assuming that the Decisional Diffie-Hellman (DDH) problem is hard to solve. We also detail a possible modification of that scheme and of the one in the original proposal, which allows us to prove the security of the schemes without assuming that a specific hash function behaves as a random oracle. As usual, this improvement in the security of the schemes comes at the cost of an efficiency loss.
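For context on the hardness assumption, the toy sketch below builds a Decisional Diffie-Hellman instance in a tiny prime-order subgroup; the parameters are deliberately insecure, and the code does not implement the distributed key distribution scheme itself.

```python
import secrets

# Toy DDH instance. Parameters are far too small for real use; they only
# illustrate the assumption underlying the security proof, not the scheme.
P = 23   # safe prime p = 2q + 1 with q = 11
Q = 11   # order of the subgroup of quadratic residues mod 23
G = 4    # generator of that subgroup

def ddh_instance(real: bool) -> tuple:
    """Return (g^a, g^b, g^c) with c = a*b mod q if `real`, else c random."""
    a = secrets.randbelow(Q - 1) + 1
    b = secrets.randbelow(Q - 1) + 1
    c = (a * b) % Q if real else secrets.randbelow(Q - 1) + 1
    return pow(G, a, P), pow(G, b, P), pow(G, c, P)

if __name__ == "__main__":
    # The DDH assumption states that, for cryptographically large parameters,
    # no efficient algorithm can tell these two distributions apart.
    print(ddh_instance(True), ddh_instance(False))
```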
Abstract:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept, the flat-partition, is introduced, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
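As background, Shamir's threshold scheme is the canonical example of a (multiplicative, for suitable thresholds) linear secret sharing scheme. The sketch below implements sharing and plain Lagrange reconstruction over a prime field; it does not implement the Berlekamp–Welch-style fault-tolerant reconstruction discussed in the paper, and the field size is chosen only for illustration.

```python
import secrets

# Shamir's threshold scheme over a prime field. Reconstruction below is plain
# Lagrange interpolation and assumes all provided shares are correct.
PRIME = 2_147_483_647  # a Mersenne prime, large enough for a toy example

def share(secret: int, threshold: int, num_shares: int):
    """Split `secret` into points of a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(points) -> int:
    """Recover the secret as the interpolated polynomial evaluated at 0."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    shares = share(123456789, threshold=3, num_shares=5)
    print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123456789
```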