43 results for Two-Sided Matching
at Instituto Politécnico do Porto, Portugal
Abstract:
In this paper we present the operational matrices of the left Caputo fractional derivative, right Caputo fractional derivative and Riemann–Liouville fractional integral for shifted Legendre polynomials. We develop an accurate numerical algorithm to solve the two-sided space–time fractional advection–dispersion equation (FADE) based on a spectral shifted Legendre tau (SLT) method in combination with the derived shifted Legendre operational matrices. The fractional derivatives are described in the Caputo sense. We propose a spectral SLT method, in both the temporal and spatial discretizations, for the two-sided space–time FADE. This technique reduces the two-sided space–time FADE to a system of algebraic equations, which simplifies the problem. Numerical experiments were carried out to confirm the spectral accuracy and efficiency of the proposed algorithm. By selecting relatively few Legendre polynomial degrees, we are able to get very accurate approximations, demonstrating the utility of the new approach over other numerical methods.
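As background for the basis functions used above, the following is a minimal sketch (not the authors' implementation) of evaluating shifted Legendre polynomials on [0, L] and fitting a truncated shifted Legendre series with NumPy; the interval length, sample function and degree are hypothetical.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def shifted_legendre(i, x, L=1.0):
    """Evaluate the i-th shifted Legendre polynomial P_i(2x/L - 1) on [0, L]."""
    c = np.zeros(i + 1)
    c[i] = 1.0                       # coefficient vector selecting P_i
    return leg.legval(2.0 * x / L - 1.0, c)

# Approximate a sample function by a truncated shifted Legendre series.
L_dom = 1.0
x = np.linspace(0.0, L_dom, 201)
f = np.exp(-x)                       # hypothetical function to expand
N = 8                                # relatively few polynomial degrees
t = 2.0 * x / L_dom - 1.0            # shifted variable on [-1, 1]
coeffs = leg.legfit(t, f, N)         # discrete least-squares fit
approx = leg.legval(t, coeffs)
print("max abs error:", np.max(np.abs(f - approx)))  # decays rapidly with N
```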
Abstract:
Over recent years, (intelligent) software agents have been employed as a means of overcoming the difficulties associated with managing, sharing and reusing a growing volume of information, while ontologies have been used to model that information in a semantically explicit and rich format. As the popularity of the Semantic Web grows and ever more information is shared in the form of ontologies, the problem of integrating this information is amplified. In such a context, two agents that wish to cooperate cannot be expected to use the same ontology to describe their conceptualisation of the world. It may even be necessary for agents to interact without prior knowledge of the ontologies used by the others, requiring them to reconcile those ontologies at run time in a process commonly known as Ontology Mapping [1]. The ontology mapping process is usually offered as a service to business agents and can be requested whenever an alignment needs to be produced. However, given that each agent has its own needs and goals, as well as the inherently subjective nature of the ontologies they use, agents may have different interests regarding the alignment process and may even resort to whichever mapping services they find most convenient [1]. Different matchers can produce distinct and even contradictory results, thus creating conflicts between the agents. An attempt must then be made to resolve the existing conflicts through a negotiation process, so that the agents can reach a consensus on the correspondences to be used when translating the messages they exchange. Conflict resolution is considered a metric of great importance for the negotiation process [2]: the fewer the unresolved conflicts in the negotiation process that generated an alignment, the higher the confidence associated with that alignment. Thus, an alignment with a high number of unresolved conflicts has a lower confidence than the same alignment associated with a high number of resolved conflicts. The negotiation process through which two or more agents generate and agree on an alignment is called Ontology Mapping Negotiation. To date, two approaches have been proposed in the literature: (i) Argumentation-based (e.g. [3] [4]) and (ii) Relaxation-based [5] [6]. Each of these proposals presents a number of advantages and limitations. Several ways of combining the two techniques have been proposed [2], with the aim of benefiting from the advantages offered and overcoming their limitations. To date, however, no documented experiments are known that could prove this claim, and it is therefore not possible to attest that such combinations do in fact bring the intended benefit. The work presented here aims to provide such experiments and to verify whether the claim of improvement over the results of the individual techniques holds. To enable the combination and to address the identified shortcomings, a new Relaxation-based approach was proposed, which is subsequently combined with the Argumentation-based approaches.
Its results, together with those of the combination, are presented and discussed here, making it possible to identify differences in the results generated by different combinations, as well as possible contexts of use.
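To make the notion of conflicting alignments concrete, here is a minimal illustrative sketch (not the system described above): two matchers each return a set of correspondences, and a conflict is counted whenever both map the same source entity to different targets. All entity names and confidence values are hypothetical.

```python
# A correspondence is (source_entity, target_entity, confidence); values are hypothetical.
alignment_a = {("Person", "Human", 0.9), ("Car", "Automobile", 0.8)}
alignment_b = {("Person", "Agent", 0.7), ("Car", "Automobile", 0.95)}

def conflicts(a, b):
    """Source entities that the two matchers map to different targets."""
    targets_a = {src: tgt for src, tgt, _ in a}
    targets_b = {src: tgt for src, tgt, _ in b}
    common = targets_a.keys() & targets_b.keys()
    return [(s, targets_a[s], targets_b[s]) for s in common if targets_a[s] != targets_b[s]]

print(conflicts(alignment_a, alignment_b))  # [('Person', 'Human', 'Agent')]
```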
Abstract:
Int’l J. of Information and Communication Technology Education, 3(2), 1-14, April-June 2007
Abstract:
The thesis is structured as two essays on distinct topics, although some affinities can be perceived between them, stemming from the fact that both fall under the analysis of different types of investment in human capital: vocational training and higher education. The first essay addresses the evaluation of the impact of different types of vocational training on wages, on the stability of the worker-employer contractual relationship and on employability in Portugal, using a semiparametric estimation methodology, more specifically propensity score matching applied to data from the INE Employment Survey for the years 1998 to 2001. Regarding wage impacts, the conclusion is that training obtained within firms is the most rewarding, but the remaining types of training also yield wage gains, with training obtained in schools or vocational training centres having the least expressive effects. Regarding the effect on employability, the estimates point to the conclusion that vocational training fosters the exit from inactivity, but does not guarantee employment; indeed, training received in schools and vocational training centres is more likely to lead to unemployment, although, for a certain fraction of the unemployed, the direction of causality may be the reverse. The second essay addresses the decomposition, at the conditional mean and by quantiles, of the wage gap between men and women specific to the universe of higher education graduates in Portugal (data from the 1st Career Survey of Higher Education Graduates, carried out in 2001), in order to assess the degree of gender discrimination it suggests. Using the Machado-Mata methodology and, alternatively, propensity score matching, it appears that, in the public sector, gender wage discrimination, if it exists, is small, i.e. the observed wage gap is explained almost entirely by differences between the productive attributes of men and women. In contrast, in the business sector, discrimination is potentially substantial. Special attention is devoted to the contribution of the field of study to the explanation of the wage gap.
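For readers unfamiliar with the estimation technique referred to above, the following is a minimal sketch of propensity score matching using scikit-learn; the data, covariates and effect size are simulated and hypothetical, and this is not the thesis' actual estimation code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical covariates, treatment (received training) and outcome (log wage)
X = rng.normal(size=(n, 3))                       # e.g. age, schooling, tenure
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = 0.5 * treated + X @ np.array([0.3, 0.2, 0.1]) + rng.normal(size=n)

# Step 1: estimate the propensity score P(treated | X)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to the control with the nearest propensity
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: average treatment effect on the treated = mean matched difference
att = np.mean(y[t_idx] - y[matches])
print(f"ATT estimate: {att:.3f}")  # should be near the true effect of 0.5
```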
Abstract:
The idiomatic expression “In Rome be a Roman” can be applied to leadership training and development as well. Leaders who can act as role models inspire other future leaders in their behaviour, attitudes and ways of thinking. Based on two examples of current leaders in the fields of Politics and Public Administration, I support the idea that exposure to role models during their training was decisive for their career paths and current activities as prominent figures in their profession. Issues such as how students should be prepared for community or national leadership, as well as cross-cultural engagement, are raised here. The hypothesis of transculturalism and cross-cultural commitment as a factor of leadership is presented. Based on the current literature on Leadership as well as the case studies presented, I hope to raise a debate focusing on strategies for improving leaders' training in cross-cultural awareness.
Abstract:
Objectives: The purpose of this article is to identify differences between surveys using paper and online questionnaires. The author has extensive experience with methodological questions in the development of survey-based research, e.g. the limits of postal and online questionnaires. Methods: Paper and online questionnaires were used in the physician studies carried out in 1995 (doctors graduated in 1982-1991), 2000 (doctors graduated in 1982-1996), 2005 (doctors graduated in 1982-2001) and 2011 (doctors graduated in 1977-2006), and in a study of 457 family doctors in 2000. The response rates were 64%, 68%, 64%, 49% and 73%, respectively. Results: The physician studies showed that there were differences between the methods, connected with the use of paper-based versus online questionnaires and with the response rate. The online survey gave a lower response rate than the postal survey. The major advantages of the online survey were the short response time, the very low cost, and the direct loading of data into the analysis software, which saved the time and resources associated with the data entry process. Conclusions: This article helps researchers plan the study design and choose the right data collection method.
Abstract:
The post-surgical period is often critical for infection acquisition. The combination of patient injury and environmental exposure through breached skin adds risk to pre-existing conditions such as drug use or depressed immunity. Several factors, such as the length of hospital stay after surgery, underlying disease, age, immune system condition, hygiene policies, careless prophylactic drug administration and the physical conditions of the healthcare centre, may contribute to the acquisition of a nosocomial infection. A purulent wound can become complicated whenever antimicrobial therapy becomes compromised. In this pilot study, we analysed Enterobacteriaceae strains, the most significant gram-negative rods that may occur in post-surgical skin and soft tissue infections (SSTI), presenting reduced β-lactam susceptibility, and those presenting extended-spectrum β-lactamases (ESBL). There is little information in our country regarding the relationship between β-lactam susceptibility, ESBL and the development of resistant strains of microorganisms in SSTI. Our main results indicate that Escherichia coli and Klebsiella spp. are among the most frequent enterobacteria (46% and 30%, respectively), with ESBL production in 72% of Enterobacteriaceae isolates from SSTI. Moreover, coinfection occurred extensively, mainly with Pseudomonas aeruginosa and methicillin-resistant Staphylococcus aureus (18% and 13%, respectively). These results suggest future research to explore if and how these associations are involved in the development of antibiotic resistance.
Abstract:
Introduction: Image resizing is a standard feature incorporated into Nuclear Medicine digital imaging. It is applied when there is a need to increase - or decrease - the total number of pixels, and upsampling is done by manufacturers to better fit the acquired images on the display screen. This paper aims to compare the “hqnx” and the “nxSaI” magnification algorithms with two interpolation algorithms – “nearest neighbor” and “bicubic interpolation” – in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to the visual approach in the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not especially noteworthy, although it was clearly noticeable that bicubic interpolation presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images presenting the best results. Hqnx resized images presented better quality than 4xSaI and nearest neighbor interpolated images; however, their intense “halo effect” greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and the nxSaI algorithms were designed for images with clear edges, so their use in Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its ever wider application seems to confirm this, establishing it as an efficient algorithm for multiple image types.
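For context, here is a minimal sketch comparing the two classical interpolation methods from the study using Pillow; the input file name is hypothetical, and the hqnx/nxSaI pixel-art scalers have no Pillow implementation and so are not shown.

```python
from PIL import Image  # pip install Pillow

img = Image.open("renal_scan.png")  # hypothetical Nuclear Medicine image
w, h = img.size

# 4x upsampling with the two interpolation algorithms compared in the paper
nearest = img.resize((4 * w, 4 * h), resample=Image.NEAREST)  # blocky, preserves pixel values
bicubic = img.resize((4 * w, 4 * h), resample=Image.BICUBIC)  # smooth, weighted 4x4 neighborhood

nearest.save("renal_scan_4x_nearest.png")
bicubic.save("renal_scan_4x_bicubic.png")
```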
Abstract:
Introduction: Although relative uptake values are not the most important objective of a 99mTc-DMSA scan, they are important quantitative information. In most dynamic renal scintigraphies, attenuation correction is essential to obtain a reliable result from the quantification process. In DMSA scans, however, the absence of significant background and the lower attenuation in pediatric patients mean that attenuation correction techniques are usually not applied. The geometric mean is the most common method, but it requires the acquisition of an anterior (extra) projection, which is not acquired by a large number of NM departments. This method and the attenuation factors proposed by Tonnesen were correlated with the absence of attenuation correction procedures. Material and Methods: Images from 20 individuals (aged 3 years +/- 2) were used and the two attenuation correction methods applied. The mean acquisition time (post DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation correction methods (r=0.73 +/- 0.11), and the mean difference in the uptake values between the methods was 4 +/- 3. The correlation was higher at lower ages. The two attenuation correction methods correlated more strongly with each other than with the “no attenuation correction” method (r=0.82 +/- 0.8), with mean differences in the uptake values of 2 +/- 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences verified in the relative kidney uptake values. Nevertheless, if an accurate value of the relative kidney uptake is needed, an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen can be easily implemented and thus become a practical alternative, namely when the anterior projection - needed for the geometric mean methodology - is not acquired.
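As a reference for the geometric mean method mentioned above, here is a minimal sketch of how geometric-mean-corrected relative uptake is commonly computed from anterior and posterior counts; the count values are hypothetical.

```python
import math

# Hypothetical background-subtracted kidney counts from posterior and anterior views
left_post, left_ant = 41000.0, 26000.0
right_post, right_ant = 38000.0, 30000.0

# The geometric mean combines opposing views to compensate for depth-dependent attenuation
left_gm = math.sqrt(left_post * left_ant)
right_gm = math.sqrt(right_post * right_ant)

total = left_gm + right_gm
print(f"Left kidney relative uptake:  {100 * left_gm / total:.1f}%")
print(f"Right kidney relative uptake: {100 * right_gm / total:.1f}%")
```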
Abstract:
The occurrence of OTA in fresh and packed wheat bread and in maize bread, and the degree of exposure through their consumption in two Portuguese populations, from Porto and Coimbra, during the winter of 2007, were studied. One hundred and sixty-eight bread samples, 61 maize and 107 wheat, were analysed by liquid chromatography–fluorescence detection (LC–FD). The results showed that 84% of the samples were contaminated, with a maximum level of 3.85 ng/g (above the EU maximum limit of 3 ng/g). Fresh wheat bread presented higher levels than packed wheat bread. Moreover, the traditional maize bread was consistently more contaminated than wheat bread in either city: 0.25 vs 0.19 ng/g for Porto and 0.48 vs 0.34 ng/g for Coimbra, respectively. Avintes maize bread showed the highest mean contamination and maximum levels. The higher estimated daily intake of OTA from both types of bread in the population of Coimbra compared to Porto reflects the higher average contamination of bread in the former city.
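The estimated daily intake mentioned above is conventionally computed as contamination times consumption divided by body weight; here is a minimal sketch with hypothetical input values (the paper's actual consumption figures are not reproduced):

```python
# Estimated daily intake (EDI) of OTA: concentration x daily consumption / body weight.
# All input values below are hypothetical and for illustration only.
ota_level_ng_per_g = 0.48       # mean OTA contamination of maize bread (ng/g)
bread_intake_g_per_day = 100.0  # assumed daily bread consumption (g/day)
body_weight_kg = 69.0           # assumed average adult body weight (kg)

edi = ota_level_ng_per_g * bread_intake_g_per_day / body_weight_kg
print(f"EDI: {edi:.2f} ng/kg b.w./day")  # compared against tolerable intake levels
```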
Abstract:
A preliminary version of this paper appeared in Proceedings of the 31st IEEE Real-Time Systems Symposium, 2010, pp. 239–248.
Abstract:
Consider the problem of determining a task-to-processor assignment for a given collection of implicit-deadline sporadic tasks upon a multiprocessor platform in which there are two distinct kinds of processors. We propose a polynomial-time approximation scheme (PTAS) for this problem. It offers the following guarantee: for a given task set and a given platform, if there exists a feasible task-to-processor assignment, then, given an input parameter ϵ, our PTAS succeeds, in polynomial time, in finding such a feasible task-to-processor assignment on a platform in which each processor is 1+3ϵ times faster. In our simulations, the PTAS outperforms the state-of-the-art PTAS [1] and, for the vast majority of task sets, requires a significantly smaller processor speedup than (its upper bound of) 1+3ϵ to successfully determine a feasible task-to-processor assignment.
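For intuition about the assignment problem (this is a simple first-fit heuristic, not the proposed PTAS), here is a minimal sketch that assigns implicit-deadline sporadic tasks, each with a different utilization on each processor type, while keeping every processor's total utilization at or below 1; all task data are hypothetical.

```python
# Illustrative first-fit assignment on a two-type platform; NOT the paper's PTAS.
tasks = [(0.6, 0.3), (0.5, 0.5), (0.2, 0.8), (0.4, 0.1)]  # (util on type-1, util on type-2)
m1, m2 = 2, 2                                             # processors of each type

load = [0.0] * (m1 + m2)  # processors 0..m1-1 are type-1, the rest are type-2

def assign(tasks):
    for u1, u2 in tasks:
        placed = False
        # Try the processor type on which the task is "cheaper" first
        order = range(m1 + m2) if u1 <= u2 else reversed(range(m1 + m2))
        for p in order:
            u = u1 if p < m1 else u2
            if load[p] + u <= 1.0:  # uniprocessor EDF meets implicit deadlines iff util <= 1
                load[p] += u
                placed = True
                break
        if not placed:
            return False
    return True

print("feasible:", assign(tasks), "loads:", load)
```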
Abstract:
Consider the problem of assigning real-time tasks on a heterogeneous multiprocessor platform comprising two different types of processors; such a platform is referred to as a two-type platform. We present two linearithmic time-complexity algorithms, SA and SA-P, each providing the following guarantee. For a given two-type platform and a given task set, if there exists a feasible task-to-processor-type assignment such that tasks can be scheduled to meet deadlines by allowing them to migrate only between processors of the same type, then (i) SA is guaranteed to find such a feasible task-to-processor-type assignment, where the same restriction on task migration applies, given a platform in which processors are 1+α/2 times faster, and (ii) SA-P succeeds in finding a feasible task-to-processor assignment in which tasks are not allowed to migrate between processors, given a platform in which processors are 1+α times faster, where 0<α≤1. The parameter α is a property of the task set; it is the maximum utilization of any task, which is less than or equal to 1.
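The parameter α above admits a one-line computation; a minimal sketch with hypothetical utilizations, assuming every task utilization is at most 1 as the abstract states:

```python
# alpha as defined above: the maximum utilization of any task, where every
# task utilization is at most 1. The utilization values are hypothetical.
utilizations = [0.9, 0.35, 0.6]        # e.g. WCET/period on a reference speed
assert all(u <= 1 for u in utilizations)
alpha = max(utilizations)              # -> 0.9; the guarantee uses 0 < alpha <= 1
print(alpha)
```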
Abstract:
Consider a single processor and a software system. The software system comprises components and interfaces, where each component has an associated interface and each component comprises a set of constrained-deadline sporadic tasks. A scheduling algorithm (called the global scheduler) determines at each instant which component is active. The active component uses another scheduling algorithm (called the local scheduler) to determine which task is selected for execution on the processor. The interface of a component makes certain information about the component visible to other components; the interfaces of all components are used for schedulability analysis. We address the problem of generating an interface for a component based on the tasks inside it. We aim to (i) incur only a small loss in schedulability analysis due to the interface and (ii) ensure that the amount of space (counted in bits) of the interface is small, since such an interface hides as many details of the component as possible. We present an algorithm for generating such an interface.
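As standard background for schedulability analysis of constrained-deadline sporadic tasks (this is textbook material, not the paper's interface-generation algorithm), here is a minimal sketch of the demand bound function that such analyses typically build on; the task parameters are hypothetical.

```python
# Demand bound function for a constrained-deadline sporadic task (C, D, T), D <= T:
# the maximum execution demand that must complete within any interval of length t.
def dbf(task, t):
    C, D, T = task
    if t < D:
        return 0
    return ((t - D) // T + 1) * C

tasks = [(1, 4, 5), (2, 6, 10)]  # hypothetical (C, D, T) triples
for t in range(0, 21, 4):
    demand = sum(dbf(tau, t) for tau in tasks)
    print(f"t={t:2d}  total demand={demand}")  # EDF-schedulable iff demand <= t for all t
```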
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks to meet all deadlines on a two-type heterogeneous multiprocessor platform where a task may request at most one of |R| shared resources. There are m1 processors of type-1 and m2 processors of type-2. Tasks may migrate only when requesting or releasing resources. We present a new algorithm, FF-3C-vpr, which offers the following guarantee: if a task set is schedulable to meet deadlines by an optimal task assignment scheme that only allows tasks to migrate when requesting or releasing a resource, then FF-3C-vpr also meets deadlines if given processors 4+6*ceil(|R|/min(m1,m2)) times as fast. As far as we know, this is the first result for resource sharing on heterogeneous platforms with provable performance.
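The speedup factor in the guarantee above is a closed-form expression; a minimal sketch evaluating it for a hypothetical platform:

```python
from math import ceil

# Speedup factor from the FF-3C-vpr guarantee: 4 + 6*ceil(|R|/min(m1, m2)).
# The resource and processor counts below are hypothetical.
def speedup(num_resources, m1, m2):
    return 4 + 6 * ceil(num_resources / min(m1, m2))

print(speedup(4, 2, 3))  # |R|=4, m1=2, m2=3 -> 4 + 6*ceil(4/2) = 16
```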