997 results for Lecture Performance


Relevance:

20.00%

Abstract:

Over time, the XML markup language has acquired considerable importance in application development, standards definition and the representation of large volumes of data, such as databases. Today, processing XML documents quickly is a critical activity in a wide range of applications, which makes choosing the most appropriate parsing mechanism essential. When a programming language such as Java is used for XML processing, effective mechanisms (e.g. APIs) are needed that allow large documents to be read and processed in an appropriate manner. This paper presents a performance study of the main existing Java APIs for handling XML documents, in order to identify the most suitable one for processing large XML files.
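
As a minimal illustration of the kind of API under comparison (not the paper's benchmark), the sketch below streams a document with StAX, one of the standard Java XML APIs alongside DOM and SAX; the file name is a placeholder. Because no in-memory tree is built, memory usage stays roughly constant even for very large files.

```java
import java.io.FileInputStream;

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxElementCount {

    public static void main(String[] args) throws Exception {
        // Placeholder path; substitute the large XML document to be measured.
        String path = "large-document.xml";

        XMLInputFactory factory = XMLInputFactory.newInstance();
        long elements = 0;
        long start = System.nanoTime();

        try (FileInputStream in = new FileInputStream(path)) {
            XMLStreamReader reader = factory.createXMLStreamReader(in);
            // Pull parsing: events are consumed one at a time, so the
            // whole document never has to fit in memory at once.
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT) {
                    elements++;
                }
            }
            reader.close();
        }

        System.out.printf("%d elements parsed in %.1f ms%n",
                elements, (System.nanoTime() - start) / 1e6);
    }
}
```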

Relevance:

20.00%

Abstract:

There is a general consensus that, in a competitive business environment, firms' performance depends on their capacity to innovate. To clarify how, when and to what extent innovation affects the market and financial performance of firms, the authors deploy a seemingly unrelated regression (SUR) model to examine innovation in over 500 Portuguese firms from 1998 to 2004. The results confirm, as theorists have frequently assumed, that innovation positively affects firms' performance; but they also suggest that the reverse is true, a result that is less intuitively obvious given the complexity of the innovation process and of local, national and global competitive environments.
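
One plausible reading of the estimated system, sketched here only for orientation (the specific regressors and the controls \(\mathbf{x}_{it}\) are assumptions, not details given in the abstract), is a pair of firm-level equations whose disturbances are allowed to be correlated and which are therefore estimated jointly, so that effects can run in both directions:

\[
\begin{aligned}
\mathrm{Performance}_{it} &= \alpha_1 + \beta_1\,\mathrm{Innovation}_{it} + \boldsymbol{\gamma}_1^{\top}\mathbf{x}_{it} + u_{1,it},\\
\mathrm{Innovation}_{it} &= \alpha_2 + \beta_2\,\mathrm{Performance}_{it} + \boldsymbol{\gamma}_2^{\top}\mathbf{x}_{it} + u_{2,it},
\end{aligned}
\qquad \operatorname{Cov}(u_{1,it}, u_{2,it}) \neq 0,
\]

where the non-zero cross-equation error correlation is precisely what the seemingly unrelated regression estimator exploits.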

Relevance:

20.00%

Abstract:

This paper investigates the effectiveness of sea-defense structures in preventing or reducing tsunami overtopping, and evaluates the resulting tsunami impact at El Jadida, Morocco. Different tsunami wave conditions are generated by considering various earthquake scenarios with magnitudes ranging from Mw = 8.0 to Mw = 8.6. These scenarios represent the main active earthquake faults in the SW Iberia margin and are consistent with two past events that generated tsunamis along the Atlantic coast of Morocco. The behavior of incident tsunami waves interacting with coastal infrastructure is analyzed on the basis of numerical simulations of near-shore tsunami wave propagation. Tsunami impact at the affected site is assessed by computing inundation and current velocity on a high-resolution digital terrain model that incorporates bathymetric, topographic and coastal-structure data. Results, in terms of near-shore tsunami propagation snapshots, wave interaction with coastal barriers, and spatial distributions of flow depths and speeds, are presented and discussed in light of what was observed during the 2011 Tohoku-oki tsunami. The predicted results show the different levels of impact that different tsunami wave conditions could generate in the region. Existing coastal barriers around the El Jadida harbour succeed in reflecting the relatively small waves generated by some scenarios, but fail to prevent the overtopping caused by the waves of others. For the scenario with the highest impact on the El Jadida coast, significant inundation is computed at the sandy beach and in unprotected areas. The dramatic tsunami impact modeled for the region shows the need for additional tsunami standards, not only for sea-defense structures but also for coastal dwellings and houses, to provide the potential for in-place evacuation.
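
As a minimal sketch of the inundation (flow depth) computation mentioned above, assuming a modeled water-surface elevation grid co-registered with the DTM; the grid names and the simple per-cell subtraction are illustrative assumptions, not the paper's data structures or numerical scheme.

```java
/**
 * Sketch: flow depth on land is the modeled water-surface elevation minus the
 * terrain elevation taken from the DTM, clipped at zero where the ground stays dry.
 */
public class FlowDepth {

    static double[][] flowDepth(double[][] waterSurface, double[][] terrain) {
        int rows = terrain.length;
        int cols = terrain[0].length;
        double[][] depth = new double[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                // Positive difference only: cells where the water surface
                // does not exceed the terrain remain dry (depth 0).
                depth[i][j] = Math.max(0.0, waterSurface[i][j] - terrain[i][j]);
            }
        }
        return depth;
    }
}
```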

Relevance:

20.00%

Abstract:

Modern networks, in particular heterogeneous cellular wireless networks, are generally analysed by taking several Key Performance Indicators (KPIs) into account, and a proper balance among them is required to guarantee a desired Quality of Service (QoS). A model is presented that integrates a set of KPIs into a single one by using a cost function that includes these KPIs, providing a single evaluation parameter as output for each network node and reflecting network conditions and the performance of common radio resource management strategies. The proposed model enables the implementation of different network management policies, by manipulating the KPIs according to users' or operators' perspectives, allowing for better QoS. Results show that different policies can in fact be established, with different impacts on the network, e.g., with median values differing by a factor of more than two.
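
The abstract does not give the cost function itself; the following is only a minimal sketch of the general idea, with hypothetical KPI names and a simple linear weighting standing in for the paper's actual formulation.

```java
import java.util.Map;

/**
 * Sketch of a KPI-integration cost function: each node's normalized KPIs are
 * combined into a single score through policy-dependent weights. The KPI names,
 * the linear combination and the [0, 1] normalization are illustrative assumptions.
 */
public class KpiCost {

    /** Assumes each KPI is normalized to [0, 1] and the policy weights sum to 1. */
    static double nodeCost(Map<String, Double> normalizedKpis, Map<String, Double> weights) {
        double cost = 0.0;
        for (Map.Entry<String, Double> e : normalizedKpis.entrySet()) {
            cost += weights.getOrDefault(e.getKey(), 0.0) * e.getValue();
        }
        return cost;
    }

    public static void main(String[] args) {
        // A user-centric policy might weight delay and blocking more heavily;
        // an operator-centric policy could emphasize load instead.
        Map<String, Double> kpis = Map.of("delay", 0.30, "blocking", 0.10, "load", 0.60);
        Map<String, Double> userPolicy = Map.of("delay", 0.5, "blocking", 0.3, "load", 0.2);
        System.out.println(nodeCost(kpis, userPolicy));
    }
}
```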

Relevance:

20.00%

Abstract:

Low-density parity-check (LDPC) codes are nowadays one of the hottest topics in coding theory, notably due to their advantages in terms of bit error rate performance and low complexity. In order to exploit the potential of the Wyner-Ziv coding paradigm, practical distributed video coding (DVC) schemes should use powerful error-correcting codes with near-capacity performance. In this paper, new ways to design LDPC codes for the DVC paradigm are proposed and studied. The new LDPC solutions rely on merging parity-check nodes, which corresponds to reducing the number of rows in the parity-check matrix. This allows the compression ratio of the source (a DCT coefficient bitplane) to be changed gracefully according to the correlation between the original and the side information. The proposed LDPC codes achieve good performance over a wide range of source correlations and better rate-distortion (RD) performance than the popular turbo codes.
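
As a rough illustration of the row-merging idea: the modulo-2 row sum used below is an assumption about how two check nodes are combined; the paper's exact merging rule and code construction are not given in the abstract.

```java
/**
 * Sketch: merging two check nodes is modeled as replacing their two rows of the
 * binary parity-check matrix H by their modulo-2 sum, which reduces the number
 * of rows and hence the number of syndrome bits that must be transmitted.
 */
public class CheckNodeMerge {

    /** Returns a new H with rows r1 and r2 (r1 != r2) replaced by their XOR. */
    static int[][] mergeRows(int[][] h, int r1, int r2) {
        int rows = h.length;
        int cols = h[0].length;
        int[][] merged = new int[rows - 1][cols];
        int out = 0;
        for (int i = 0; i < rows; i++) {
            if (i == r2) {
                continue; // the second of the two merged rows is dropped
            }
            for (int j = 0; j < cols; j++) {
                merged[out][j] = (i == r1) ? (h[r1][j] ^ h[r2][j]) : h[i][j];
            }
            out++;
        }
        return merged;
    }

    /** Compression ratio expressed as syndrome bits per source bit: rows / columns. */
    static double compressionRatio(int[][] h) {
        return (double) h.length / h[0].length;
    }
}
```

Each merge lowers the ratio returned by compressionRatio, which is how the rate can track the varying correlation between the source bitplane and the side information.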

Relevance:

20.00%

Abstract:

Introduction: A major focus of the data mining process, and especially of machine learning research, is to automatically learn to recognize complex patterns and to support adequate decisions based strictly on the acquired data. Since imaging techniques such as Myocardial Perfusion Imaging (MPI) in nuclear cardiology can account for a large part of the daily workflow and generate gigabytes of data, computerized analysis may offer advantages over human analysis: shorter time, homogeneity and consistency, automatic recording of analysis results, relatively low cost, etc. Objectives: The aim of this study is to evaluate the efficacy of this methodology in assessing MPI stress studies and in deciding whether or not to continue the evaluation of each patient. The objective pursued was to automatically classify a patient's test into one of three groups: "Positive", "Negative" and "Indeterminate". "Positive" cases would proceed directly to the rest part of the exam, "Negative" cases would be exempted from continuation, and only the "Indeterminate" group would require the clinician's analysis, thus saving clinicians' effort, increasing workflow fluidity at the technologist's level and probably saving patients' time. Methods: The WEKA v3.6.2 open-source software was used for a comparative analysis of three algorithms ("OneR", "J48" and "Naïve Bayes") in a retrospective study of the "SPECT Heart Dataset", available from the University of California, Irvine Machine Learning Repository, using the corresponding clinical results, signed off by expert nuclear cardiologists, as the reference. For evaluation purposes, criteria such as "Precision", "Incorrectly Classified Instances" and "Receiver Operating Characteristic (ROC) Areas" were considered. Results: The interpretation of the data suggests that the Naïve Bayes algorithm performs best among the three selected algorithms. Conclusions: It is believed, and apparently supported by the findings, that machine learning algorithms could significantly assist, at an intermediate level, in the analysis of scintigraphic data obtained with MPI, namely after the stress acquisition, thereby increasing the efficiency of the entire system and potentially easing the roles of both technologists and nuclear cardiologists. In the continuation of this study, it is planned to use more patient information and to significantly increase the population under study, in order to improve the system's accuracy.
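
A minimal sketch of how such a comparison can be scripted with the WEKA Java API (class names are those of current WEKA 3.x releases and the study itself may have used the WEKA GUI instead; "spect.arff" is a placeholder for an ARFF export of the UCI SPECT Heart Dataset):

```java
import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.rules.OneR;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SpectComparison {

    public static void main(String[] args) throws Exception {
        // Placeholder file name for an ARFF version of the SPECT Heart Dataset.
        Instances data = DataSource.read("spect.arff");
        data.setClassIndex(data.numAttributes() - 1);

        Classifier[] classifiers = { new OneR(), new J48(), new NaiveBayes() };
        for (Classifier c : classifiers) {
            // 10-fold cross-validation with a fixed seed for reproducibility.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(c, data, 10, new Random(1));
            System.out.printf("%s: %.1f%% incorrectly classified, weighted ROC area %.3f%n",
                    c.getClass().getSimpleName(),
                    eval.pctIncorrect(), eval.weightedAreaUnderROC());
        }
    }
}
```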

Relevance:

20.00%

Abstract:

Master's dissertation, Economics and Business Sciences, 17 June 2015, Universidade dos Açores.

Relevance:

20.00%

Abstract:

Master's degree in Business Control and Management

Relevance:

20.00%

Abstract:

OBJECTIVE: To analyze the scores obtained with an instrument that evaluates the ability to read and understand items in the healthcare setting, according to education and age. METHODS: The short version of the Test of Functional Health Literacy in Adults was administered to 312 healthy participants of different ages and years of schooling. The study was conducted between 2006 and 2007 in the city of São Paulo, Southeastern Brazil. The test includes actual materials, such as pill bottles and appointment slips, and measures reading comprehension, assessing the ability to read and correctly pronounce a list of words and to understand both prose passages and numerical information. Pearson partial correlations and a multiple regression model were used to verify the association between its scores and education and age. RESULTS: The mean age of the sample was 47.3 years (SD = 16.8) and the mean education was 9.7 years (SD = 5; range: 1-17). A total of 32.4% of the sample showed literacy/numeracy deficits, scoring in the inadequate or marginal functional health literacy ranges. Among the elderly (65 years or older) this rate increased to 51.6%. There was a positive correlation between schooling and scores (r = 0.74; p < 0.01) and a negative correlation between age and scores (r = -0.259; p < 0.01). The correlation between scores and age was not significant when the effect of education was held constant (rp = -0.031, p = 0.584). A significant association (B = 3.877, β = 0.733; p < 0.001) was found between schooling and scores, whereas age was not a significant predictor in this model (B = -0.035, β = -0.22; p = 0.584). CONCLUSIONS: The short version of the Test of Functional Health Literacy in Adults was a suitable tool to assess health literacy in the study population. The high number of individuals classified as functionally illiterate by this test highlights the importance of special assistance to help them properly understand directions for healthcare.
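
The finding that the age-score correlation disappears once education is controlled for rests on the first-order partial correlation; for reference, its standard formula (not stated in the abstract) is

\[
r_{SA\cdot E} = \frac{r_{SA} - r_{SE}\,r_{AE}}{\sqrt{\left(1 - r_{SE}^{2}\right)\left(1 - r_{AE}^{2}\right)}},
\]

where S denotes the test score, A age and E years of schooling. With r_{SE} = 0.74 and r_{SA} = -0.259, the reported partial correlation of -0.031 indicates that age adds essentially no explanatory power beyond schooling, consistent with the regression result.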

Relevance:

20.00%

Abstract:

Master's degree in Accounting and Management of Financial Institutions

Relevance:

20.00%

Abstract:

This paper proposes a new high-performance architecture for computing all the DCT operations adopted in the H.264/AVC and HEVC standards. In contrast to other dedicated transform cores, the presented multi-standard transform architecture is based on a completely configurable, scalable and unified structure that is able to compute not only the forward and inverse 8×8 and 4×4 integer DCTs and the 4×4 and 2×2 Hadamard transforms defined in the H.264/AVC standard, but also the 4×4, 8×8, 16×16 and 32×32 integer transforms adopted in HEVC. Experimental results obtained with a Xilinx Virtex-7 FPGA demonstrate the superior performance and hardware-efficiency levels provided by the proposed structure, which outperforms the most prominent related designs by at least 1.8 times. When integrated in a multi-core embedded system, this architecture allows all the transforms mentioned above to be computed in real time for resolutions as high as 8k Ultra High Definition Television (UHDTV) (7680×4320 @ 30 fps).
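
For reference, a software sketch of one of the listed transforms, the 4×4 forward integer DCT core of H.264/AVC (Y = Cf · X · Cf^T), with the per-coefficient scaling that is normally folded into quantization omitted. The paper's contribution is the unified, configurable hardware datapath covering all of the listed transforms, which this sketch does not attempt to model.

```java
/**
 * 4x4 forward integer transform core of H.264/AVC: Y = CF * X * CF^T.
 * The element-wise scaling matrix applied during quantization is omitted.
 */
public class IntDct4x4 {

    private static final int[][] CF = {
        { 1,  1,  1,  1},
        { 2,  1, -1, -2},
        { 1, -1, -1,  1},
        { 1, -2,  2, -1}
    };

    static int[][] forward(int[][] x) {
        return multiply(multiply(CF, x), transpose(CF));
    }

    private static int[][] multiply(int[][] a, int[][] b) {
        int[][] r = new int[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    private static int[][] transpose(int[][] m) {
        int[][] t = new int[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                t[j][i] = m[i][j];
        return t;
    }
}
```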

Relevance:

20.00%

Abstract:

Linear unmixing decomposes a hyperspectral image into a collection of reflectance spectra of the materials present in the scene, called endmember signatures, and the corresponding abundance fractions at each pixel in a spatial area of interest. This paper introduces a new unmixing method, called Dependent Component Analysis (DECA), which overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical properties of hyperspectral data. DECA models the abundance fractions as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The performance of the method is illustrated using simulated and real data.
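
For reference, the linear mixing model and the abundance constraints that DECA enforces can be written as (the notation is chosen here, not taken from the paper)

\[
\mathbf{x}_i = \mathbf{M}\,\mathbf{a}_i + \mathbf{n}_i, \qquad a_{ik} \ge 0, \qquad \sum_{k=1}^{p} a_{ik} = 1,
\]

where \mathbf{x}_i is the observed spectrum at pixel i, the columns of \mathbf{M} are the p endmember signatures, \mathbf{a}_i collects the abundance fractions and \mathbf{n}_i is noise. The mixture-of-Dirichlet prior placed on \mathbf{a}_i has exactly this simplex as its support, which is how the non-negativity and constant-sum constraints are enforced while the GEM algorithm infers the mixing matrix.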