Abstract:
Sandpit exploitation near Lisbon allowed the collection of many Miocene, non-marine fossils. These sands are part of the mostly marine Miocene series of the Lower Tagus basin. This particularly favourable situation led several researchers to address marine-continental correlations. Difficulties often concern methodological aspects, and some poorly grounded interpretations have exerted a lasting influence; a critical approach is presented here. Analysis requires data. Methods based upon models often lead to the temptation of fitting data to confirm a priori conclusions, or of mixing up data as if they were of equal statistical value when they do not have the same weight at all. Uncritical repetition of erroneous interpretations over many years "upgraded" them into absolute truth. Another point is endemism vs. Europeanism. Miocene mammals from Lisbon compared well with corresponding, contemporaneous French taxa, while this was apparently not true for Spanish ones. Too much emphasis had been placed on the endemic character of Spanish, or even regional, mammalian faunas. Nationalist bias and sensationalism also weigh in, albeit negatively. Meanwhile, nearly all the more evident examples, such as the rhinoceros Hispanotherium, have been discredited as Iberian endemisms. Taxa may appear endemic simply because they have not yet been found elsewhere. At least for medium- to large-sized mammals, with their huge geographic distribution, faunal differences depend much more on ecology, climate and environmental conditions. Emphasis on differences may also come from researchers who are often in a precarious situation and badly need to achieve short-term, preferably sensational results. Overvalued differences may mask real similarities. Unethical and unscientific behaviour is further encouraged by "nomina nuda" tricks that may simply be a way to circumvent or cheat the Priority Rule. On the other hand, access to communication networks may present as sensational novelties items that are not new at all, misleading the audience. A new class of "science people" has arisen, created by the media and not by the value of their real achievements. Discussion is presented on sedimentation processes and discontinuities, which are often regarded as absolute-precision dating tools, as well as on some geochemical and paleomagnetic interpretations. A very good chronological frame has been obtained for the basin under study on the basis of an impressive set of data, providing a rather detailed and accurate framework for Miocene marine-continental correlations.
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling in order to achieve efficient utilization of the system’s processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based “task-splitting” scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, until now no unified schedulability theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). The new theory is based on exact schedulability tests, thereby also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of the new theory are evaluated by an extensive set of experiments.
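To make the connection between schedulability testing and task assignment concrete, the following is a minimal, illustrative Python sketch of a first-fit assignment guided by a placeholder utilization-based test. The names, the heuristic and the utilization bound are assumptions chosen for illustration; they are not the actual S-EKG/NPS-F assignment procedures or exact tests analysed in the article.

```python
# Illustrative sketch: schedulability-test-guided task assignment for a
# semi-partitioned scheme. The first-fit heuristic and the utilization
# bound below stand in for the exact tests described in the article.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    wcet: float      # worst-case execution time C
    period: float    # minimum inter-arrival time T (= deadline, implicit)

    @property
    def utilization(self) -> float:
        return self.wcet / self.period

@dataclass
class Processor:
    pid: int
    tasks: list = field(default_factory=list)

    def utilization(self) -> float:
        return sum(t.utilization for t in self.tasks)

def fits(proc: Processor, task: Task, bound: float) -> bool:
    """Placeholder schedulability test: accept the assignment while the
    processor's total utilization stays within a configurable bound."""
    return proc.utilization() + task.utilization <= bound

def assign(tasks, num_procs, bound=1.0):
    """First-fit assignment; tasks that fit nowhere are returned as
    candidates for splitting across two processors (the 'task-splitting' step)."""
    procs = [Processor(i) for i in range(num_procs)]
    to_split = []
    for task in sorted(tasks, key=lambda t: t.utilization, reverse=True):
        target = next((p for p in procs if fits(p, task, bound)), None)
        if target is not None:
            target.tasks.append(task)
        else:
            to_split.append(task)
    return procs, to_split

if __name__ == "__main__":
    taskset = [Task("t1", 2, 5), Task("t2", 3, 10), Task("t3", 4, 8), Task("t4", 1, 4)]
    procs, to_split = assign(taskset, num_procs=2)
    for p in procs:
        print(p.pid, [t.name for t in p.tasks], round(p.utilization(), 2))
    print("to split:", [t.name for t in to_split])
```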
Abstract:
Consider the problem of scheduling a task set τ of implicit-deadline sporadic tasks to meet all deadlines on a t-type heterogeneous multiprocessor platform where tasks may access multiple shared resources. The multiprocessor platform has m_k processors of type-k, where k ∈ {1, 2, …, t}. The execution time of a task depends on the type of processor on which it executes. The set of shared resources is denoted by R. For each task τ_i, there is a resource set R_i ⊆ R such that, for each job of τ_i, during one phase of its execution, the job requests to hold the resource set R_i exclusively, with the interpretation that (i) the job makes a single request to hold all the resources in the resource set R_i and (ii) at all times when a job of τ_i holds R_i, no other job holds any resource in R_i. Each job of task τ_i may request the resource set R_i at most once during its execution. A job is allowed to migrate when it requests a resource set and when it releases the resource set, but a job is not allowed to migrate at other times. Our goal is to design a scheduling algorithm for this problem and prove its performance. We propose an algorithm, LP-EE-vpr, which offers the guarantee that if an implicit-deadline sporadic task set is schedulable on a t-type heterogeneous multiprocessor platform by an optimal scheduling algorithm that allows a job to migrate only when it requests or releases a resource set, then our algorithm also meets the deadlines with the same restriction on job migration, if given processors 4 × (1 + MAXP × ⌈(|P| × MAXP) / min{m_1, m_2, …, m_t}⌉) times as fast. (Here MAXP and |P| are computed based on the resource sets that tasks request.) For the special case in which each task requests at most one resource, the bound of LP-EE-vpr collapses to 4 × (1 + ⌈|R| / min{m_1, m_2, …, m_t}⌉). To the best of our knowledge, LP-EE-vpr is the first algorithm with a proven performance guarantee for real-time scheduling of sporadic tasks with resource sharing on t-type heterogeneous multiprocessors.
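As a small illustration of how the stated speed-up bounds can be evaluated, the Python sketch below computes both bounds, reading each bracketed term as the ceiling of a fraction with min{m_1, …, m_t} in the denominator (the interpretation adopted above). The function names and example numbers are assumptions for illustration, not part of the article.

```python
import math

def speedup_bound(max_p: int, num_p: int, procs_per_type: list) -> int:
    """General speed-up factor of LP-EE-vpr as stated in the abstract:
    4 * (1 + MAXP * ceil(|P| * MAXP / min{m_1, ..., m_t}))."""
    m_min = min(procs_per_type)
    return 4 * (1 + max_p * math.ceil(num_p * max_p / m_min))

def speedup_bound_single_resource(num_resources: int, procs_per_type: list) -> int:
    """Special case in which each task requests at most one resource:
    4 * (1 + ceil(|R| / min{m_1, ..., m_t}))."""
    m_min = min(procs_per_type)
    return 4 * (1 + math.ceil(num_resources / m_min))

# Example: 3 processor types with 4, 2 and 8 processors, and |R| = 5 resources.
print(speedup_bound_single_resource(num_resources=5, procs_per_type=[4, 2, 8]))
# 4 * (1 + ceil(5 / 2)) = 16
```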
Abstract:
European Master’s Degree in Human Rights and Democratisation, Academic Year 2005/2006.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis of a lower-dimensional subspace using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, termed vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
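As a rough numerical illustration of the geometric idea behind this kind of pure-pixel extraction (not the exact VCA algorithm, which also performs subspace identification and SNR-dependent projections), the Python sketch below simulates a linear mixture that contains pure pixels and then selects endmember candidates as the extremes of projections onto directions orthogonal to the endmembers already found. All names and data are illustrative assumptions.

```python
# Illustrative sketch of VCA-style endmember extraction: generate a noiseless
# linear mixture with pure pixels present, then iteratively pick the pixel
# with the largest component along a direction orthogonal to the span of the
# endmembers found so far.

import numpy as np

def extract_endmembers(X: np.ndarray, p: int) -> np.ndarray:
    """X: (bands, pixels) data matrix; p: number of endmembers.
    Returns the indices of the selected (purest) pixels."""
    rng = np.random.default_rng(0)
    indices = []
    E = np.zeros((X.shape[0], 0))            # endmembers found so far
    for _ in range(p):
        # Random direction, made orthogonal to the span of the columns of E.
        f = rng.standard_normal(X.shape[0])
        if E.shape[1] > 0:
            f = f - E @ np.linalg.pinv(E) @ f  # remove component in span(E)
        f /= np.linalg.norm(f)
        scores = np.abs(f @ X)                 # extreme of the projection
        indices.append(int(np.argmax(scores)))
        E = X[:, indices]
    return np.array(indices)

if __name__ == "__main__":
    # Simulated linear mixture: 3 endmembers, 50 bands, 1000 mixed pixels,
    # abundances drawn on the simplex, pure pixels appended at the end.
    rng = np.random.default_rng(1)
    M = rng.uniform(0.1, 1.0, size=(50, 3))        # endmember signatures
    A = rng.dirichlet(np.ones(3), size=1000).T     # abundances (3, 1000)
    X = np.hstack([M @ A, M])                      # append the pure pixels
    print("selected pixel indices:", extract_endmembers(X, p=3))
```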
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Geographic Information Science and Systems.
Abstract:
The aim is to examine the temporal trends of hip fracture incidence in Portugal by sex and age group, and to explore their relation to anti-osteoporotic medication. From the National Hospital Discharge Database, we selected, from 1st January 2000 to 31st December 2008, 77,083 hospital admissions (77.4% women) caused by osteoporotic hip fractures (low energy, patients over 49 years of age), with diagnosis codes 820.x of ICD-9-CM. The 2001 Portuguese population was used as the standard to calculate direct age-standardized incidence rates (ASIR) (per 100,000 inhabitants). Generalized additive and linear models were used to evaluate and quantify temporal trends of age-specific rates (AR), by sex. We identified 2003 as a turning point in the trend of the ASIR of hip fractures in women. After 2003, the ASIR in women decreased on average by 10.3 cases/100,000 inhabitants, 95% CI (−15.7 to −4.8), per 100,000 anti-osteoporotic medication packages sold. For women aged 65–69 and 75–79 we identified the same turning point. However, for women aged over 80, the year 2004 marked a change in the trend, from an increase to a decrease. Among the population aged 70–74, a linear decrease of the incidence rate (95% CI) was observed in both sexes, higher for women: −28.0% (−36.2 to −19.5) change vs. −18.8% (−32.6 to −2.3). The abrupt turning point in the trend of the ASIR of hip fractures in women is compatible with an intervention, such as a medication. The trends differed according to sex and age group, but were compatible with the pattern of bisphosphonate sales.
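For readers unfamiliar with direct age-standardization, the short Python sketch below illustrates how an ASIR per 100,000 is obtained by weighting age-specific rates with standard-population weights. The age bands and counts are invented for the example; they are not the study's data, and the study uses the 2001 Portuguese population as the standard.

```python
# Minimal illustration of direct age-standardization (ASIR per 100,000).
# All numbers below are invented purely for the example.

def direct_asir(cases, person_years, standard_pop):
    """cases, person_years, standard_pop: dicts keyed by age group."""
    total_std = sum(standard_pop.values())
    asir = 0.0
    for group in cases:
        age_specific_rate = cases[group] / person_years[group]  # per person-year
        weight = standard_pop[group] / total_std                # standard-population weight
        asir += age_specific_rate * weight
    return asir * 100_000  # expressed per 100,000 inhabitants

cases        = {"50-64": 120,     "65-79": 480,     "80+": 900}
person_years = {"50-64": 500_000, "65-79": 350_000, "80+": 150_000}
standard_pop = {"50-64": 520_000, "65-79": 330_000, "80+": 140_000}

print(round(direct_asir(cases, person_years, standard_pop), 1))
```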
Abstract:
Despite the abundant literature on knowledge management, few empirical studies have explored knowledge management in connection with international assignees. This phenomenon has special relevance in the Portuguese context, since (a) there are no empirical studies concerning this issue involving international Portuguese companies; (b) the national business reality is incipient as far as internationalisation is concerned; and (c) the organisational and national culture presents characteristics that are distinct from the most studied contexts (e.g., Asia, USA, Scandinavian countries, Spain, France, The Netherlands, Germany, England and Russia). We examine the role of expatriates in knowledge transfer and sharing within Portuguese companies with operations abroad. We focus specifically on expatriates' role in knowledge sharing in international Portuguese companies, and our findings take into account both organizational representatives' and expatriates' perspectives. Using a comparative case study approach, we examine how three main dimensions influence the role of expatriates in knowledge sharing between headquarters and their subsidiaries (types of international assignment, reasons for using expatriation, and international assignment characteristics). Data were collected using semi-structured interviews with 30 Portuguese repatriates and 14 organizational representatives from seven Portuguese companies. The findings suggest that the reasons that lead Portuguese companies to expatriate employees are connected to: (1) business expansion needs; (2) control of international operations; and (3) knowledge transfer and sharing. Our study also shows that Portuguese companies use international assignments to respond positively to an increasingly declining domestic market in the economic areas in which they operate. Evidence also reveals that expatriation is seen as a strategy to fulfill main organizational objectives through expatriates (e.g., business internationalization, improvement of the coordination and control of the units/subsidiaries abroad, replication of aspects of the home base, and development and incorporation of new organizational techniques and processes). We also conclude that Portuguese companies have developed an International Human Resources Management strategy based on an ethnocentric approach, typically associated with companies in the early stages of internationalization, i.e., authority and decision making are centred in the home base. Expatriates have a central role in transmitting culture and technical knowledge from the company's headquarters to the company's branches. Based on the findings, the article discusses in detail the main theoretical and managerial implications. Suggestions for further research are also presented.
Abstract:
IEEE 802.11 is one of the most well-established and widely used standards for wireless LANs. Its Medium Access Control (MAC) layer assumes that devices adhere to the standard's rules and timers to ensure fair access to and sharing of the medium. However, the flexibility and configurability of wireless card drivers make it possible for selfish, misbehaving nodes to gain advantages over the other, well-behaving nodes. The existence of selfish nodes degrades the QoS of the other devices in the network and may increase their energy consumption. In this paper we propose a green solution for selfish misbehavior detection in IEEE 802.11-based wireless networks. The proposed scheme works in two phases: a Global phase, which detects whether or not the network contains selfish nodes, and a Local phase, which identifies which node or nodes within the network are selfish. Usually, the network must be examined frequently for selfish nodes during its operation, since any node may act selfishly. Our solution is green in the sense that it saves network resources: it avoids wasting the nodes' energy by not examining all individual nodes for selfishness when this is unnecessary. The proposed detection algorithm is evaluated using extensive OPNET simulations. The results show that the Global network metric clearly indicates the existence of a selfish node, while the Local node metric successfully identifies the selfish node(s). We also provide a mathematical analysis of the selfish misbehavior and derive formulas for the successful channel access probability.
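As a schematic illustration of the two-phase structure described above (not the paper's actual metrics or OPNET-derived formulas), the Python sketch below runs a cheap global check on an aggregate channel-access statistic and only then performs the per-node local identification. The metric (each node's share of successful channel accesses) and the thresholds are assumptions for illustration.

```python
# Schematic two-phase detection: a cheap global check decides whether the
# more energy-expensive per-node (local) inspection should run at all.

from statistics import mean

def global_phase(access_counts: dict, threshold: float = 1.5) -> bool:
    """Flag the network as suspicious if any node's number of successful
    channel accesses exceeds `threshold` times the fair (mean) share."""
    fair_share = mean(access_counts.values())
    return any(count > threshold * fair_share for count in access_counts.values())

def local_phase(access_counts: dict, threshold: float = 1.5) -> list:
    """Identify the individual node(s) whose share exceeds the same bound."""
    fair_share = mean(access_counts.values())
    return [node for node, count in access_counts.items()
            if count > threshold * fair_share]

observed = {"A": 310, "B": 295, "C": 905, "D": 300}   # node C grabs the channel too often
if global_phase(observed):
    print("selfish node(s):", local_phase(observed))  # only now inspect per node
else:
    print("no selfish behaviour detected")            # no energy spent on per-node checks
```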
Abstract:
8th International Workshop on Multiple Access Communications (MACOM2015), Helsinki, Finland.
Abstract:
Work in Progress Session, 21st IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS 2015), 13-16 April 2015, Seattle, U.S.A., pp. 27-28.
Abstract:
INTRODUCTION: Conventional risk stratification after acute myocardial infarction is usually based on the extent of myocardial damage and its clinical consequences. Nowadays, however, more aggressive therapeutic strategies are used, both pharmacological and invasive, with the aim of changing the course of the disease. OBJECTIVES: To evaluate whether the number of drugs administered can influence survival of these patients, based on recent clinical trials that demonstrated the benefit of each drug for survival after acute coronary events. METHODS: This was a retrospective analysis of 368 consecutive patients admitted to our ICU during 2002 for acute coronary syndrome. A score from 1 to 4 was attributed to each patient according to the number of secondary prevention drugs administered--antiplatelets, beta-blockers, angiotensin-converting enzyme inhibitors and statins--regardless of the specific combination. We evaluated mortality at 30-day follow-up. RESULTS: Mean age was 65 +/- 13 years, 68% were male, and 43% had ST-segment elevation acute myocardial infarction. Thirty-day mortality for scores 1 to 4 was 36.8%, 15.6%, 7.8% and 2.5%, respectively (p < 0.001). The use of only one or two drugs resulted in a significant increase in the risk of death at 30 days (OR 4.10, 95% CI 1.69-9.93, p = 0.002) when corrected for other variables. There was a 77% risk reduction associated with the use of three or four vs. one or two drugs. The other independent predictors of death were diabetes, Killip class on admission and renal insufficiency. CONCLUSIONS: The use of a greater number of secondary prevention drugs in patients with acute coronary syndromes was associated with improved survival. A score of 4 was a powerful predictor of mortality at 30-day follow-up.
Abstract:
The aim of this paper is to present the main Portuguese results from a multi-national study on the reading format preferences and behaviors of undergraduate students from the Polytechnic Institute of Porto (Portugal). For this purpose we applied an adaptation of the Academic Reading Questionnaire previously created by Mizrachi (2014). This survey instrument has 14 Likert-style statements regarding the influence of format on students' reading behavior, including aspects such as ability to remember, feelings about access convenience, active engagement with the text by highlighting and annotating, and ability to review and concentrate on the text. The importance of the language and the length of the text in determining the preferred format is also inquired about. Students are also asked about the electronic device they use to read digital documents. Finally, some demographic and academic data were gathered. The analysis of the results will be contextualized in a review of the literature concerning youngsters' reading format preferences. The format (digital or print) in which a text is displayed and read can impact comprehension, which is an important information literacy skill. This is quite a relevant issue for class readings in an academic context because it impacts learning. On the other hand, students' preferences regarding reading formats will influence the use of library services. However, the literature is not unanimous on this subject. Woody, Daniel and Baker (2010) concluded that the experience of reading is not the same in electronic and print contexts and that students prefer print books to e-books. This thesis is reinforced by Ji, Michaels and Waterman (2014), who report that, among 101 undergraduates, the large majority self-reported reading and learning more when using printed format, despite preferring electronically supplied readings to those supplied in printed form. On the other side, Rockinson-Szapkiw et al. (2013) conducted a study in which they demonstrate that the e-textbook is as effective for learning as the traditional textbook and that students who chose e-textbooks had significantly higher perceived learning than students who chose print textbooks.
Abstract:
Introduction. Peritubular capillary C4d (complement 4d) staining is one of the criteria for the diagnosis of antibody-mediated rejection, and its detection is essential in kidney allograft evaluation. The immunofluorescence technique applied to frozen sections is the current gold-standard method for C4d staining and is used routinely in our laboratory. The immunohistochemistry technique applied to paraffin-embedded tissue may be used when no frozen tissue is available. Material and Methods. The aim of this study is to evaluate the sensitivity and specificity of immunohistochemistry compared with immunofluorescence. We describe the advantages and disadvantages of the immunohistochemistry vs. the immunofluorescence technique. For this purpose, C4d staining was performed retrospectively by the two methods in indication biopsies (n=143) and graded using the Banff 07 classification. Results. There was total classification agreement between the methods in 87.4% (125/143) of cases. However, immunohistochemistry staining caused more difficulties in interpretation, due to nonspecific staining in tubular cells and the surrounding interstitium. All cases negative by immunofluorescence were also negative by immunohistochemistry. The biopsies were classified as positive in 44.7% (64/143) of cases by immunofluorescence vs. 36.4% (52/143) by immunohistochemistry. Fewer biopsies were classified as positive diffuse in the immunohistochemistry group (25.1% vs. 31.4%) and more as positive focal (13.2% vs. 11.1%). More cases were classified as negative by immunohistochemistry (63.6% vs. 55.2%). ROC curve analysis showed that immunohistochemistry has a specificity of 100% and a sensitivity of 81.2% relative to immunofluorescence (AUC: 0.906; 95% confidence interval: 0.846-0.949; p=0.0001). Conclusions. The immunohistochemistry method presents excellent specificity but lower sensitivity for C4d detection in allograft dysfunction. The evaluation is more difficult, requiring a more experienced observer than the immunofluorescence method. Based on these results, we conclude that the immunohistochemistry technique can safely be used when immunofluorescence is not available.
Abstract:
Introduction & Objectives: Several factors may influence the decision to pursue nonsurgical modalities for the treatment of non-melanoma skin cancer. Topical photodynamic therapy (PDT) is a non-invasive alternative treatment reported to have high efficacy when standardized protocols are used in Bowen's disease (BD), superficial basal cell carcinoma (BCC) and thin nodular BCC. However, long-term recurrence studies are lacking. The aim of this study was to evaluate the long-term efficacy of PDT with topical methylaminolevulinate (MAL) for the treatment of BD and BCC in a dermato-oncology department. Materials & Methods: All patients with a diagnosis of BD or BCC treated with MAL-PDT from 2004 to 2008 were enrolled. The treatment protocol included two MAL-PDT sessions one week apart, repeated at three months in case of incomplete response, using a red light dose of 37-40 J/cm2 and an exposure time of 8'20''. Clinical records were retrospectively reviewed, and data regarding age, sex, tumour location, size, treatment outcomes and recurrence were registered. Descriptive analysis was performed using chi-square tests, followed by survival analysis with Kaplan-Meier and Cox regression models. Results: Sixty-eight patients (median age 71.0 years, P25;P75=30;92) with a total of 78 tumours (31 BD, 45 superficial BCC, 2 nodular BCC) and a median tumour size of 5 cm2 were treated. Overall, the median follow-up period was 43.5 months (P25;P75=0;100), and a total recurrence rate of 33.8% was observed (24.4% for BCC vs. 45.2% for BD). Estimated recurrence rates for BCC and BD were 5.0% vs. 7.4% at 6 months, 23.4% vs. 27.9% at 12 months, and 30.0% vs. 72.4% at 60 months. Both age and diagnosis were independent prognostic factors for recurrence, with significantly higher estimated recurrence rates in patients with BD (p=0.0036) or younger than 58 years old (p=0.039). The risk of recurrence (hazard ratio) was 2.4 times higher in patients with BD compared with superficial BCC (95% CI: 1.1-5.3; p=0.033), and 2.8 times higher in patients younger than 58 years old (95% CI: 1.2-6.5; p=0.02). Conclusions: In the studied population, estimated recurrence rates are higher than those expected from the available literature, possibly due to a longer follow-up period. To the authors' knowledge there is only one other study with a similar follow-up period, and it concerns BCC only. BD, as an in situ squamous cell carcinoma, has a higher tendency to recur than superficial BCC. Despite better cosmesis, PDT might not be the best treatment option for young patients considering their higher risk of recurrence.