990 results for P(x)-laplacian Problem
Abstract:
X-ray fluoroscopy is essential in both diagnosis and medical intervention, although it may contribute significant radiation doses to patients, which have to be optimised and justified. It is therefore crucial that the patient be exposed to the lowest achievable dose without compromising image quality. The purpose of this study was to analyse quality control measurements, particularly dose rates, contrast and spatial resolution, of Portuguese fluoroscopy equipment, and also to contribute to the establishment of reference levels for equipment performance parameters. Measurements carried out between 2007 and 2013 on 143 fluoroscopy units distributed across 34 health units nationwide were analysed. The measurements suggest that the image quality and dose rates of Portuguese equipment are congruent with other studies and, in general, comply with Portuguese law. However, there is still room for improvement with a view to optimisation at the national level.
Abstract:
This work describes the synthesis and characterization of a series of new α-diimine and P,O (β-keto and acetamide phosphine) ligands, and their complexation with Ni(II), Co(II), Co(III) and Pd(II) to obtain a series of new compounds, aiming to study their structural characteristics and to test their catalytic activity. All the compounds synthesized were characterized by the usual spectroscopic and spectrometric techniques: elemental analysis, MALDI-TOF-MS spectrometry, IR, UV-vis, and 1H, 13C and 31P NMR spectroscopies. Some of the paramagnetic compounds were also characterized by EPR. For the majority of the compounds it was possible to solve their solid-state structure by single-crystal X-ray diffraction. Olefin polymerization tests were performed in order to determine the catalytic activity of the Co(II) complexes. Chapter I presents a brief introduction to homogeneous catalysis, highlighting the reactions catalyzed by the type of compounds described in this thesis, namely olefin polymerization and oligomerization and reactions catalyzed by complexes bearing α-diimine and P,O type ligands. Chapter II is dedicated to the synthesis of new α-diimine cobalt(II) complexes of general formula [CoX2(α-diimine)], where X = Cl or I and the α-diimines are bis(aryl)acenaphthenequinonediimine (Ar-BIAN) and 1,4-diaryl-2,3-dimethyl-1,4-diaza-1,3-butadiene (Ar-DAB). Structures solved by single-crystal X-ray diffraction were obtained for all the described complexes. For some of the compounds, X-band EPR measurements were performed on polycrystalline samples, showing a high-spin Co(II) (S = 3/2) ion in a distorted axial environment. Single-crystal EPR experiments on two of the compounds allowed us to determine the orientation of the g tensor in the molecular structure. In Chapter III we continue with the synthesis and characterization of further cobalt(II) complexes bearing α-diimines, of general formula [CoX2(α-diimine)], with X = Cl or I, where the α-diimines are bis(aryl)acenaphthenequinonediimine (Ar-BIAN) and 1,4-diaryl-2,3-dimethyl-1,4-diaza-1,3-butadiene (Ar-DAB). The structures of three of the new compounds were determined by single-crystal X-ray diffraction. A paramagnetic NMR characterization of all the compounds described is presented. Ethylene polymerization tests were carried out to determine the catalytic activity of several of the Co(II) complexes described in Chapters II and III, and their results are shown. In Chapter IV a new rigid bidentate ligand, bis(1-naphthylimino)acenaphthene, and its complexes with Zn(II) and Pd(II) were synthesized. Both the ligand and its complexes show syn and anti isomers. Structures of the ligand and of the anti isomer of the Pd(II) complex were solved by single-crystal X-ray diffraction. All the compounds were characterized by elemental analysis, MALDI-TOF-MS spectrometry, and by IR, UV-vis, 1H, 13C, 1H-1H COSY, 1H-13C HSQC, 1H-13C HSQC-TOCSY and 1H-1H NOESY NMR where necessary. DFT studies showed that both conformers of [PdCl2(BIAN)] are isoenergetic and can be obtained experimentally. However, the isomerization process is predicted not to be accessible in the square-planar complex, although it is possible for the free ligand. The molecular geometry is very similar in both isomers, and only different orientations of the naphthyl groups are expected.
Chapter V describes the synthesis of new P,O type ligands, a β-keto phosphine, R2PCH2C(O)Ph, and an acetamide phosphine, R2PNHC(O)Me, as well as a series of new cobalt(III) complexes, namely [(η5-C5H5)CoI2{Ph2PCH2C(O)Ph}] and [(η5-C5H5)CoI2{Ph2PNHC(O)Me}]. Treating these Co(III) compounds with an excess of Et3N resulted in the η2-phosphinoenolate complex [(η5-C5H5)CoI{Ph2PCH…C(…O)Ph}] and the η2-acetamide phosphine complex [(η5-C5H5)CoI{Ph2PN…C(…O)Me}]. Nickel(II) complexes were also obtained: cis-[Ni(Ph2PN…C(…O)Me)2] and cis-[Ni((i-Pr)2PN…C(…O)Me)2]. Their geometry and isomerism are discussed. Seven structures of the compounds described in this chapter were determined by single-crystal X-ray diffraction. The general conclusions of this work can be found in Chapter VI.
Abstract:
Purpose: To compare image quality and effective dose when the 10 kVp rule is applied in manual and AEC modes in PA chest X-ray. Methods and Materials: A total of 68 images (with and without lesions) of an anthropomorphic chest phantom were acquired on a Wolverson Arcoma X-ray unit. The images were evaluated against a reference image by five radiographers, using image quality criteria and the two-alternative forced choice (2AFC) method. The effective dose was calculated with PCXMC software using the exposure parameters and DAP. The exposure index (lgM) was recorded. Results: Exposure time decreases considerably when applying the 10 kVp rule in manual mode (50%-28%) compared to AEC mode (36%-23%). Statistically significant differences in effective dose between the several AEC modes were found (p=0.002). The effective dose is lowest when using only the right AEC ionisation chamber. Considering image quality, there are no statistically significant differences (p=0.348) between the different AEC modes for images with no lesions. Using a higher kVp value, the lgM values also increase; the lgM values showed statistically significant differences (p<0.001). The image quality scores did not present statistically significant differences (p=0.043) for the images with lesions when comparing manual with AEC modes. Conclusion: In general, the dose is lower in manual mode. Using the right AEC ionisation chamber yields the lowest effective dose in comparison to the other ionisation chambers. The use of the 10 kVp rule did not affect the detectability of the lesions.
Abstract:
The process of resource systems selection plays an important role in the integration of Distributed/Agile/Virtual Enterprises (D/A/V Es). However, resource systems selection is still a difficult problem to solve in a D/A/VE, as this paper points out. Globally, the selection problem has been approached from different perspectives, giving rise to different kinds of models/algorithms to solve it. To assist the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection model activities and tools, and that can adapt to each D/A/VE project or instance (the major goal of our final project), this paper presents a formulation of one kind of resource selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer programming problem, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected point to the need for other kinds of algorithms (approximate solution algorithms) outside the domain of applicability found for the algorithms simulated. For a broker tool, however, knowledge of the algorithms' limitations is very important in order to develop and select, based on problem features, the most suitable algorithm that guarantees good performance.
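The abstract does not reproduce the integer programming formulation. A generic task-resource assignment model of the kind described (one pre-selected resource chosen per processing task; the notation is assumed here, not taken from the paper) can be written as:

\[
\min \sum_{t=1}^{T} \sum_{r \in R_t} c_{tr}\, x_{tr}
\quad \text{s.t.} \quad
\sum_{r \in R_t} x_{tr} = 1 \;\; (t = 1,\dots,T), \qquad
x_{tr} \in \{0,1\},
\]

where \(T\) is the number of processing tasks, \(R_t\) the set of resources pre-selected for task \(t\), \(c_{tr}\) the cost of assigning task \(t\) to resource \(r\), and \(x_{tr}\) the binary assignment variable. The search space grows with \(T\) and \(|R_t|\), which is consistent with the processing-time limitations reported in the paper.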
Abstract:
Purpose: To investigate whether standard X-ray acquisition factors for orbital radiographs are suitable for the detection of ferromagnetic intra-ocular foreign bodies in patients undergoing MRI. Method: 35 observers, at varied levels of education in radiography, attending a European Dose Optimisation EURASMUS Summer School were asked to score 24 images of varying acquisition factors against a clinical standard (reference image) using two-alternative forced choice. The observers were provided with 12 questions and a 5-point Likert scale. Statistical tests were used to validate the scale, and scale reliability was also measured. The images which scored equal to, or better than, the reference image (36) were ranked alongside their corresponding effective dose (E); the image with the lowest dose scoring equal to or better than the reference was considered to define the new optimum acquisition factors. Results: Four images emerged as equal to, or better than, the reference in terms of image quality. These images were then ranked in order of E. Only one image that scored the same as the reference had a lower dose. The reference image had a mean E of 3.31 μSv; the image that scored the same had an E of 1.8 μSv. Conclusion: Against the current clinical standard exposure factors of 70 kVp, 20 mAs and the use of an anti-scatter grid, one image proved to have a lower E whilst maintaining the same level of image quality and lesion visibility. It is suggested that the new exposure factors should be 60 kVp, 20 mAs, still including the use of an anti-scatter grid.
Abstract:
The experimental model of paracoccidioidomycosis induced in mice by the intravenous injection of yeast forms of P. brasiliensis (Bt2 strain; 1 × 10^6 viable fungi/animal) was used to evaluate, sequentially, 2, 4, 8, 16 and 20 weeks after inoculation: 1. the presence of immunoglobulins and C3 in the pulmonary granulomata, by direct immunofluorescence; 2. the humoral (immunodiffusion test) and cellular (footpad swelling test) immune response; 3. the histopathology of the lesions. The cell-mediated immune response was positive from week 2, showing a transitory depression at week 16. Specific antibodies were first detected at week 4 and peaked at week 16. At histology, epithelioid granulomas with numerous fungi and polymorphonuclear aggregates were seen. The lungs showed progressive involvement up to week 16, with a slight decrease at week 20. From week 2 on, there were deposits of IgG and C3 around fungal walls within the granulomas, and IgG-stained cells among the mononuclear cells of the peripheral halo. Interstitial immunoglobulin and C3 deposits in the granulomas were not detected. IgG and C3 seem to play an early and important role in the host defenses against P. brasiliensis, possibly by cooperating in the killing of the parasites and blocking antigenic diffusion.
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
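As a very reduced illustration of how a stability evaluation function can be embedded in a CLP constructive heuristic, the sketch below checks a crude support condition (centre of gravity over the footprint of a supporting box). This is a toy under stated assumptions, not the paper's mechanical-equilibrium algorithm; the Box type and the tolerance are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; z: float   # minimum corner of the box
    w: float; d: float; h: float   # width, depth, height

def crudely_stable(box: Box, placed: list[Box], eps: float = 1e-9) -> bool:
    """Toy support check: a box is accepted if it rests on the floor or if
    the projection of its centre of gravity lies inside the footprint of
    some box in direct contact beneath it. A true static-equilibrium test
    (as proposed in the paper) would instead balance forces and moments
    over the whole set of contacts."""
    if box.z <= eps:                                   # resting on the container floor
        return True
    cx, cy = box.x + box.w / 2, box.y + box.d / 2      # centre-of-gravity projection
    for b in placed:
        on_top = abs((b.z + b.h) - box.z) <= eps       # direct vertical contact
        if on_top and b.x <= cx <= b.x + b.w and b.y <= cy <= b.y + b.d:
            return True
    return False
```

A constructive heuristic would call such a predicate before committing each placement, rejecting positions that fail the stability test.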
Abstract:
This paper starts by introducing the Grünwald–Letnikov derivative, the Riesz potential and the problem of generalizing the Laplacian. Based on these ideas, generalizations of the Laplacian for the 1D and 2D cases are studied. A fractional version of the Cauchy–Riemann conditions is presented and, finally, the n-dimensional Laplacian is discussed.
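For reference, the standard definitions of the two operators named above (standard forms from the fractional-calculus literature, not quoted from the paper) are:

\[
D^{\alpha} f(x) \;=\; \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}} \sum_{k=0}^{\infty} (-1)^{k} \binom{\alpha}{k} f(x - kh),
\qquad
(I^{\alpha} f)(x) \;=\; \frac{1}{\gamma_n(\alpha)} \int_{\mathbb{R}^{n}} \frac{f(y)}{|x-y|^{\,n-\alpha}}\, dy,
\]

with \( \gamma_n(\alpha) = \pi^{n/2}\, 2^{\alpha}\, \Gamma(\alpha/2) / \Gamma\!\big((n-\alpha)/2\big) \) and \( 0 < \alpha < n \). The Riesz potential \(I^{\alpha}\) acts as the inverse of the fractional Laplacian \((-\Delta)^{\alpha/2}\), which is what makes these two notions natural starting points for the generalization discussed.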
Abstract:
X-Ray Spectrom. 2003; 32: 396–401
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers, as formalized below. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data.
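For reference, the linear mixing model invoked throughout this discussion has the standard form (notation assumed here, not quoted from the chapter):

\[
\mathbf{r} \;=\; \mathbf{M}\,\boldsymbol{\alpha} + \mathbf{n},
\qquad
\alpha_i \ge 0, \qquad \sum_{i=1}^{p} \alpha_i = 1,
\]

where \( \mathbf{r} \) is the observed spectral vector, \( \mathbf{M} = [\mathbf{m}_1, \dots, \mathbf{m}_p] \) collects the \(p\) endmember signatures, \( \boldsymbol{\alpha} \) holds the abundance fractions and \( \mathbf{n} \) is noise. The nonnegativity and sum-to-one constraints on \( \boldsymbol{\alpha} \) are exactly what confine the noise-free observations to the simplex whose vertices are the endmembers.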
The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The remaining pixels are rejected when their spectral angle distance (SAD) to an exemplar is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
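A minimal sketch of the iterative orthogonal-projection extraction just described, assuming pure pixels are present. The function name, the random choice of the initial direction and the data layout (bands × pixels) are illustrative assumptions, not the published VCA implementation:

```python
import numpy as np

def extract_endmembers(R: np.ndarray, p: int, seed: int = 0) -> np.ndarray:
    """R: (L x N) matrix of N spectral vectors with L bands.
    Returns an (L x p) matrix whose columns are the extracted signatures."""
    L, _ = R.shape
    E = np.zeros((L, p))
    rng = np.random.default_rng(seed)
    f = rng.standard_normal(L)                 # initial projection direction
    for i in range(p):
        v = (f / np.linalg.norm(f)) @ R        # project every pixel onto f
        E[:, i] = R[:, np.argmax(np.abs(v))]   # extreme of the projection = new endmember
        A = E[:, : i + 1]
        # next direction: drawn orthogonal to the span of the endmembers found so far
        P = np.eye(L) - A @ np.linalg.pinv(A)
        f = P @ rng.standard_normal(L)
    return E
```

Each iteration costs one matrix-vector product over the data, which is consistent with the complexity advantage over the volume-inflation search performed by N-FINDR.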
Abstract:
Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with a limited power supply, as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms is optimising the energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores such that the dynamic energy consumption is reduced; (ii) the second phase refines the allocation performed in the first phase in order to achieve better sleep states, by trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption and the sleep states of the cores to reduce the energy consumption of the system. Particular emphasis has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which has been shown, in previous works, to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
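A toy sketch of the two-phase idea described above, with made-up power numbers, utilisation used as a proxy for execution time over a common period, and DVFS set-point selection and the temporal-constraint check omitted. It illustrates the dynamic-versus-leakage trade-off, not the paper's algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class Core:
    name: str
    dyn_power: float               # dynamic power at the core's frequency set-point (W)
    leak_power: float              # leakage power while the core stays awake (W)
    load: float = 0.0              # allocated utilisation, in [0, 1]
    tasks: list = field(default_factory=list)

def phase1(tasks, cores):
    """Phase (i): place each task on the feasible core where its dynamic
    energy (utilisation x dynamic power) is lowest."""
    for name, u in tasks:
        best = min((c for c in cores if c.load + u <= 1.0),
                   key=lambda c: u * c.dyn_power)
        best.load += u
        best.tasks.append((name, u))

def phase2(cores, light=0.3):
    """Phase (ii): migrate all tasks off a lightly loaded core when the
    leakage saved by putting it to sleep outweighs the extra dynamic
    energy paid on the receiving core."""
    for donor in [c for c in cores if 0.0 < c.load <= light]:
        for recv in cores:
            if recv is donor or recv.load + donor.load > 1.0:
                continue
            extra_dyn = donor.load * (recv.dyn_power - donor.dyn_power)
            if donor.leak_power > extra_dyn:   # sleeping the donor pays off
                recv.tasks += donor.tasks
                recv.load += donor.load
                donor.tasks, donor.load = [], 0.0
                break

cores = [Core("big", dyn_power=2.0, leak_power=0.8),
         Core("LITTLE", dyn_power=0.7, leak_power=0.3)]
phase1([("t1", 0.4), ("t2", 0.2), ("t3", 0.1)], cores)
phase2(cores)
```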
Abstract:
Chronic low back pain (CLBP) is a public health problem, and older women have a higher incidence of this symptom, which affects body balance, functional capacity and behavior. The purpose of this study was to verify the effect of exercises with the Nintendo Wii on CLBP, functional capacity and mood of the elderly. Thirty older women (68 ± 4 years; 68 ± 12 kg; 154 ± 5 cm) with CLBP participated in this study. The participants were divided into a Control Exercise Group (n = 14) and an Experimental Wii Group (n = 16). The Control Exercise Group did strength exercises and core training, while the Experimental Wii Group did the same exercises in addition to exercises with the Wii. CLBP, balance, functional capacity and mood were assessed pre- and post-training by the numeric pain scale, the Wii Balance Board, the sit-to-stand test and the Profile of Mood States, respectively. Training lasted eight weeks, with sessions performed three times weekly. A 2 × 2 MANOVA showed no interaction for pain, sitting, standing up and mood (P = 0.53). However, there was a significant difference within groups (P = 0.0001). A 2 × 2 ANOVA showed no interaction for each variable (P > 0.05). However, there were significant differences within groups in these variables (P < 0.05). Tukey's post-hoc test showed a significant difference in pain in both groups (P = 0.0001). Wilcoxon and Mann-Whitney tests identified no significant differences in balance (P > 0.01). The capacity to sit improved only in the Experimental Wii Group (P = 0.04). In conclusion, physical exercises with the Nintendo Wii Fit Plus in addition to strength and core training were effective only for sitting capacity, and the effect size was small.
Abstract:
The Container Loading Problem (CLP) literature has traditionally evaluated the dynamic stability of cargo by applying two metrics to box arrangements: the mean number of boxes supporting the items, excluding those placed directly on the floor (M1), and the percentage of boxes with insufficient lateral support (M2). However, these metrics, which aim to be proxies for cargo stability during transportation, fail to translate real-world conditions of dynamic stability. In this paper two new performance indicators are proposed to evaluate the dynamic stability of cargo arrangements: the number of fallen boxes (NFB) and the number of boxes within the Damage Boundary Curve fragility test (NB_DBC). Using 1500 solutions for well-known problem instances found in the literature, these new performance indicators are evaluated using a physics simulation tool (StableCargo), which replaces real-world transportation by truck with a simulation of the dynamic behaviour of the container loading arrangements. Two new dynamic stability metrics that can be integrated within any container loading algorithm are also proposed. The metrics are analytical models of the proposed stability performance indicators, computed by multiple linear regression, and Pearson's r correlation coefficient was used to evaluate the performance of the models. The extensive computational results show that the proposed metrics are better proxies for dynamic stability in the CLP than the previously widely used metrics.
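The analytical models mentioned above are obtained by multiple linear regression and evaluated with Pearson's r. A minimal sketch of that pipeline on made-up data (the regressors are hypothetical; the paper's actual feature set is not reproduced here):

```python
import numpy as np

# Made-up training data: each row holds candidate features measurable inside
# a CLP algorithm (e.g. a support ratio and a stack height); y holds the
# indicator observed in the physics simulation (e.g. number of fallen boxes).
X = np.array([[0.9, 1.8],
              [0.6, 3.1],
              [0.4, 4.0],
              [0.8, 2.2],
              [0.5, 3.6]])
y = np.array([1.0, 4.0, 7.0, 2.0, 5.0])

A = np.column_stack([np.ones(len(X)), X])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # multiple linear regression

pred = A @ coef
r = np.corrcoef(pred, y)[0, 1]                 # Pearson's r as the evaluation parameter
print("coefficients:", coef, "Pearson r:", r)
```

Once fitted, such a model can score candidate packings analytically inside a container loading algorithm, avoiding a physics simulation per evaluation.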