25 results for the pay-off method
at Instituto Politécnico do Porto, Portugal
Abstract:
Introduction / Aims: Adopting important decisions is a specific task of the manager. An efficient manager takes these decisions through a systematic process with well-defined elements, each in a precise order. In pharmaceutical practice and business, during the supply process of pharmacies, there are situations in which medicine distributors offer a certain discount but require payment within a shorter period of time. In these cases, the offer can be analysed with the help of the decision tree method, which permits identifying the decision that offers the best possible result in a given situation. The aims of the research were to analyse the product offers of several different suppliers and to establish the most advantageous ways of supplying pharmacies. Material / Methods: The general product offers of the following medical stores were studied: A&G Med, Farmanord, Farmexim, Mediplus, Montero and Relad. For medicine offers including a discount, the decision tree method was applied in order to select the most advantageous offers. The decision tree is a management method used for making the right decisions and is generally employed when one needs to evaluate decisions that involve a series of stages. The tree diagram is used to look for the most efficient means of attaining a specific goal. Decision trees are probabilistic methods, useful when adopting decisions that involve risk. Results: The analysis of the tree diagrams indicated that purchasing medicines with a discount (1%, 10%, 15%) and payment within a shorter time interval (120 days) is more profitable than purchasing without a discount and payment within a longer time interval (160 days). Discussion / Conclusion: Depending on the results of the tree diagram analysis, the pharmacies would purchase from the selected suppliers. The research has shown that the decision tree method is a valuable working instrument for choosing the best ways of supplying pharmacies and that it is very useful to specialists in the pharmaceutical field and pharmaceutical management, to medicine suppliers, to pharmacy practitioners in community pharmacies and especially to pharmacy managers and chief pharmacists.
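As a hypothetical illustration of the comparison described in this abstract (the study's actual cash flows and cost of capital are not given, so all figures below are assumed), the two branches of such a decision tree can be compared as discounted net costs:

```python
# Hypothetical sketch of the decision-tree comparison described above.
# All figures (order value, cost of capital) are assumed for illustration
# and are not taken from the study; only the discount rates and payment
# terms (1%, 10%, 15% at 120 days vs. no discount at 160 days) come from it.

def net_cost(order_value, discount, payment_days, annual_rate=0.10):
    """Net cost of an offer: price after discount, minus the financing
    benefit of paying later (simple interest on the deferred amount)."""
    price = order_value * (1.0 - discount)
    financing_benefit = price * annual_rate * payment_days / 365.0
    return price - financing_benefit

order_value = 10_000.0  # assumed order size (monetary units)

# Branch A: supplier discounts with payment in 120 days
for discount in (0.01, 0.10, 0.15):
    print(f"discount {discount:>4.0%}, 120 days -> net cost "
          f"{net_cost(order_value, discount, 120):8.2f}")

# Branch B: no discount, payment in 160 days
print(f"no discount, 160 days -> net cost "
      f"{net_cost(order_value, 0.0, 160):8.2f}")
```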
Abstract:
The most common techniques for stress analysis/strength prediction of adhesive joints involve analytical or numerical methods such as the Finite Element Method (FEM). However, the Boundary Element Method (BEM) is an alternative numerical technique that has been successfully applied to the solution of a wide variety of engineering problems. This work evaluates the applicability of the boundary element code BEASY as a design tool to analyze adhesive joints. The linearity of peak shear and peel stresses with the applied displacement is studied and compared between BEASY and the analytical model of Frostig et al., considering a bonded single-lap joint under tensile loading. The BEM results are also compared with FEM in terms of stress distributions. To evaluate the mesh convergence of BEASY, the influence of the mesh refinement on peak shear and peel stress distributions is assessed. Joint stress predictions are carried out numerically in BEASY and ABAQUS®, and analytically by the models of Volkersen, Goland and Reissner, and Frostig et al. The failure loads for each model are compared with experimental results. The preparation, processing, and mesh creation times are compared for all models. The BEASY results presented good agreement with the conventional methods.
Abstract:
The total antioxidant capacity (TAC) of 28 flavoured water samples was assessed by the ferric reducing antioxidant potential (FRAP), oxygen radical absorbance capacity (ORAC), trolox equivalent antioxidant capacity (TEAC) and total reactive antioxidant potential (TRAP) methods. It was observed that flavoured waters had higher antioxidant activity than the corresponding natural ones. The observed differences were attributed to flavours, juice and vitamins. Generally, higher TAC values were obtained for lemon waters and lower values for guava and raspberry flavoured waters. The lowest and highest TACs were obtained by the TRAP and ORAC methods, respectively. Statistical analysis suggested that vitamins and flavours increased the antioxidant content of the commercial waters.
Abstract:
Component joining is typically performed by welding, fastening, or adhesive bonding. For bonded aerospace applications, adhesives must withstand high temperatures (200°C or above, depending on the application), which implies their mechanical characterization under identical conditions. The extended finite element method (XFEM) is an enhancement of the finite element method (FEM) that can be used for the strength prediction of bonded structures. This work proposes and validates damage laws for a thin layer of an epoxy adhesive at room temperature (RT), 100, 150, and 200°C using the XFEM. The fracture toughness (GIc) and maximum load in pure tensile loading were defined by testing double-cantilever beam (DCB) and bulk tensile specimens, respectively, which permitted building the damage laws for each temperature. The bulk test results revealed that the maximum load decreased gradually with the temperature. On the other hand, the value of GIc of the adhesive, extracted from the DCB data, was shown to be relatively insensitive to temperature up to the glass transition temperature (Tg), while above Tg (at 200°C) a great reduction took place. The output of the DCB numerical simulations for the various temperatures showed a good agreement with the experimental results, which validated the obtained data for the strength prediction of bonded joints in tension. Based on the obtained results, the XFEM proved to be an alternative for the accurate strength prediction of bonded structures.
Abstract:
Adhesive bonding for the joining of multi-component structures is gaining momentum over welding, riveting and fastening. The availability of accurate damage models is vital for the design of bonded structures, in order to minimize design costs and time to market. Cohesive Zone Models (CZMs) have been used for fracture prediction in structures. The eXtended Finite Element Method (XFEM) is a recent improvement of the Finite Element Method (FEM) that relies on traction-separation laws similar to those of CZMs, but it allows the growth of discontinuities within bulk solids along an arbitrary path by enriching degrees of freedom. This work proposes and validates a damage law to model crack propagation in a thin layer of a structural epoxy adhesive using the XFEM. The fracture toughness in pure mode I (GIc) and the tensile cohesive strength (σn0) were defined by Double-Cantilever Beam (DCB) and bulk tensile tests, respectively, which permitted building the damage law. The XFEM simulations of the DCB tests accurately matched the experimental load-displacement (P-δ) curves, which validated the analysis procedure.
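As a minimal sketch (not the authors' implementation), a triangular traction-separation law of the kind used in CZM/XFEM analyses can be built from the two measured properties, the cohesive strength and GIc. All material values below are placeholders; the initial stiffness choice K = E/t is a common assumption, not taken from the study.

```python
import numpy as np

def triangular_traction_separation(sigma_max, G_Ic, E_adhesive, t_adhesive):
    """Build a triangular (bilinear) traction-separation law from the
    cohesive strength sigma_max and fracture toughness G_Ic.
    Returns the separations at damage onset and at complete failure."""
    K = E_adhesive / t_adhesive        # assumed initial stiffness of the layer
    delta_0 = sigma_max / K            # separation at damage initiation
    delta_f = 2.0 * G_Ic / sigma_max   # area under the triangle equals G_Ic
    return delta_0, delta_f

# Placeholder properties, roughly typical of a structural epoxy (assumed):
sigma_max = 30.0e6    # Pa, tensile cohesive strength
G_Ic = 400.0          # J/m^2, mode I fracture toughness
E = 2.0e9             # Pa, adhesive Young's modulus
t = 0.2e-3            # m, adhesive layer thickness

d0, df = triangular_traction_separation(sigma_max, G_Ic, E, t)
print(f"damage onset at {d0*1e6:.2f} um, complete failure at {df*1e6:.2f} um")
```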
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem was formulated as a large-scale mixed-integer linear problem, suitable for being solved by means of a widespread commercial package. Results of the proposed optimization method are compared with another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings, which are achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to efficiently handle large-scale compensation problems.
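A toy sketch of the kind of mixed-integer linear formulation described above (binary placement decisions plus continuous sizing), using the open-source PuLP package. The savings coefficients, costs and bank limits are invented, and the actual model in the paper also covers switched banks, load levels and investment deferral, which are omitted here.

```python
# Toy mixed-integer linear sketch of capacitor placement and sizing.
# All numeric data are invented for illustration.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

nodes = ["n1", "n2", "n3"]
savings_per_kvar = {"n1": 0.8, "n2": 1.2, "n3": 0.5}   # $/kvar/year (assumed)
bank_cost = 300.0          # fixed cost per installed bank ($/year, assumed)
cost_per_kvar = 0.3        # variable cost ($/kvar/year, assumed)
max_kvar_per_node = 900.0  # largest bank considered (assumed)

prob = LpProblem("capacitor_placement", LpMaximize)
place = {n: LpVariable(f"place_{n}", cat=LpBinary) for n in nodes}
size = {n: LpVariable(f"kvar_{n}", lowBound=0) for n in nodes}

# Objective: linearized energy-loss savings minus installation costs
prob += lpSum(savings_per_kvar[n] * size[n]
              - bank_cost * place[n]
              - cost_per_kvar * size[n] for n in nodes)

# A node can only be sized if a bank is actually placed there
for n in nodes:
    prob += size[n] <= max_kvar_per_node * place[n]

prob.solve()
for n in nodes:
    print(n, int(value(place[n])), f"{value(size[n]):.0f} kvar")
```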
Abstract:
In the initial stage of this work, two potentiometric methods were used to determine the salt (sodium chloride) content in bread and dough samples from several cities in the north of Portugal. A reference method (potentiometric precipitation titration) and a newly developed chloride ion-selective electrode (ISE) were applied. Both methods determine the sodium chloride content through the quantification of chloride. To evaluate the accuracy of the ISE, bread and respective dough samples were analyzed by both methods. Statistical analysis (0.05 significance level) indicated that the results of these methods did not differ significantly; therefore, the ISE is an adequate alternative for the determination of chloride in the analyzed samples. To compare the results of these chloride-based methods with a sodium-based method, sodium was quantified in the same samples by a reference method (atomic absorption spectrometry). Significant differences between the results were found. In several cases the sodium chloride content exceeded the legal limit when the chloride-based methods were used, but not when the sodium-based method was applied. This could lead to the erroneous application of fines, and therefore the authorities should supply additional information regarding the analytical procedure to be used for this particular control.
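A short worked illustration of why chloride-based and sodium-based results can diverge: each measured ion is converted to an equivalent NaCl content by stoichiometry, so any chloride or sodium not originating from NaCl shifts the estimate. The molar masses are standard values; the sample figures and the example limit are invented.

```python
# Converting measured chloride or sodium into an equivalent NaCl content.
# Molar masses (g/mol) are standard; the sample data are invented.
M_NACL, M_CL, M_NA = 58.44, 35.45, 22.99

def nacl_from_chloride(cl_g_per_100g):
    """NaCl estimate assuming all chloride comes from NaCl."""
    return cl_g_per_100g * M_NACL / M_CL

def nacl_from_sodium(na_g_per_100g):
    """NaCl estimate assuming all sodium comes from NaCl."""
    return na_g_per_100g * M_NACL / M_NA

# Hypothetical bread sample: chloride and sodium measured independently
cl, na = 0.85, 0.52   # g per 100 g of bread (assumed)
print(f"NaCl from chloride: {nacl_from_chloride(cl):.2f} g/100 g")  # ~1.40
print(f"NaCl from sodium:   {nacl_from_sodium(na):.2f} g/100 g")    # ~1.32
# A legal limit of, e.g., 1.4 g/100 g would be exceeded by only one estimate.
```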
Abstract:
Introduction: The quantification of the differential renal function in adults can be difficult due to many factors - one of these is the variance in kidney depth and the attenuation related to all the tissues between the kidney and the camera. Some authors state that the lower attenuation in pediatric patients makes the use of attenuation correction algorithms unnecessary. This study compares the values of differential renal function obtained with and without attenuation correction techniques. Material and Methods: Images from a group of 15 individuals (aged 3 years +/- 2) were used and two attenuation correction methods were applied - Tonnesen correction factors and the geometric mean method. The mean time of acquisition (time post 99mTc-DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of any attenuation correction method apparently leads to consistent values that correlate well with the ones obtained when attenuation correction methods are incorporated. The differences found between the values obtained with and without attenuation correction were not significant. Conclusion: The decision not to apply any attenuation correction method can apparently be justified by the minor differences verified in the relative kidney uptake values. Nevertheless, if a really accurate value of the relative kidney uptake is required, then an attenuation correction method should be used.
Abstract:
Introduction: Although relative uptake values are not the most important objective of a 99mTc-DMSA scan, they are important quantitative information. In most dynamic renal scintigraphies, attenuation correction is essential if one wants to obtain a reliable result from the quantification process. In DMSA scans, however, the absence of significant background and the lower attenuation in pediatric patients mean that these attenuation correction techniques are usually not applied. The geometric mean is the most common method, but it requires the acquisition of an (extra) anterior projection, which is not acquired by a large number of NM departments. This method and the attenuation factors proposed by Tonnesen were correlated with the absence of attenuation correction procedures. Material and Methods: Images from 20 individuals (aged 3 years +/- 2) were used and the two attenuation correction methods were applied. The mean time of acquisition (time post DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of attenuation correction showed a good correlation with both attenuation correction methods (r=0.73 +/- 0.11) and the mean difference in the uptake values between the different methods was 4 +/- 3. The correlation was higher when the age was lower. The two attenuation correction methods correlated more strongly with each other than with the "no attenuation correction" method (r=0.82 +/- 0.8), and the mean difference in the uptake values was 2 +/- 2. Conclusion: The decision not to apply any attenuation correction method can be justified by the minor differences verified in the relative kidney uptake values. Nevertheless, if it is recognized that there is a need for an accurate value of the relative kidney uptake, then an attenuation correction method should be used. The attenuation correction factors proposed by Tonnesen can be easily implemented and thus become a practical alternative, namely when the anterior projection - needed for the geometric mean method - is not acquired.
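A minimal sketch of the geometric-mean calculation referred to in the two abstracts above, with invented counts. The Tonnesen depth-based factors themselves are not reproduced here; only a generic exponential attenuation correction is shown, with an assumed attenuation coefficient and assumed kidney depths.

```python
import math

# Invented background-corrected kidney counts from a 99mTc-DMSA study
post = {"left": 15200, "right": 13800}   # posterior projection
ant  = {"left":  9800, "right": 11900}   # anterior projection

def relative_uptake(counts):
    total = counts["left"] + counts["right"]
    return {k: round(100.0 * v / total, 1) for k, v in counts.items()}

# Geometric mean method: combine anterior and posterior views per kidney
gm = {k: math.sqrt(post[k] * ant[k]) for k in post}
print("posterior only :", relative_uptake(post))
print("geometric mean :", relative_uptake(gm))

# Depth-based correction of the posterior view: counts scaled by exp(mu*depth).
# The depths are assumed, and mu ~ 0.12 cm^-1 is a broad-beam value often
# quoted for 140 keV photons in soft tissue (assumption, not from the study).
mu = 0.12
depth = {"left": 4.0, "right": 4.5}      # cm, assumed kidney depths
corrected = {k: post[k] * math.exp(mu * depth[k]) for k in post}
print("depth corrected:", relative_uptake(corrected))
```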
Abstract:
Wet-blue and wet-white shavings are a solid residue of the tannery industry with a high carbon content, which makes their use in the preparation of materials of technological interest relevant. The aim of this work was the preparation of activated carbons from tannery industry residues. The activation methods used were physical activation with carbon dioxide and chemical activation with potassium hydroxide. The carbonization of the residues was studied in the temperature range of 500 ºC to 800 ºC, and it was found that carbonization can be considered complete at 800 ºC. In physical activation, the precursors were first carbonized under an inert atmosphere at 800 ºC and subsequently activated at 940 ºC using CO2 as the activating agent. In the activation step, the activation time was varied (20, 40 and 60 minutes) in order to study the influence of the degree of burn-off on the textural properties of the activated carbons. The activated carbon obtained by physical activation with the highest specific surface area was the carbon prepared from the wet-blue residue with an activation time of 40 minutes and a burn-off of 23% (SBET = 152 m2/g). For chemical activation, the precursors were impregnated with KOH using KOH:precursor impregnation mass ratios of 0.5:1, 1:1 and 3:1. The impregnation was carried out directly on the wet-blue and wet-white residues and on the carbonized wet-blue and wet-white residues. Activation was carried out at 940 ºC under an inert atmosphere, with a heating rate of 5 ºC/min and an activation time of 1 hour. In the series of activated carbons obtained by impregnation of the precursor, the carbon exhibiting the best textural properties was the one prepared by impregnation at the 1:1 ratio from the wet-blue residue (SBET = 1696 m2/g). In the series of activated carbons prepared by impregnation of the carbonized precursor, the carbon with the best textural properties was the one obtained by impregnation of the carbonized wet-blue residue at the 3:1 ratio (SBET = 1507 m2/g). The activated carbons obtained by this activation method are essentially microporous and have a high specific surface area. Preliminary adsorption tests show that these activated carbons perform well in removing colour from tannery industry effluents. It was concluded that chemical activation of the wet-blue and wet-white residues with KOH yields activated carbons with good textural properties, high specific surface areas and a high micropore volume when compared with the activated carbons resulting from physical activation. It was thus concluded that both residues are good precursors for the production of activated carbon, particularly through chemical activation, thereby reducing the volume of tannery industry residues sent to landfill.
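A small sketch of the burn-off calculation that the physical-activation step relies on (mass lost during CO2 activation relative to the carbonized mass). The masses below are invented and chosen only to reproduce a figure of the same order as the 23% quoted above.

```python
def burn_off(mass_before_activation, mass_after_activation):
    """Degree of burn-off (%) during activation: mass lost by the
    carbonized precursor relative to its mass before activation."""
    lost = mass_before_activation - mass_after_activation
    return 100.0 * lost / mass_before_activation

# Invented masses for a carbonized wet-blue sample activated for 40 min
m0, m1 = 10.00, 7.70   # g before and after activation (assumed)
print(f"burn-off: {burn_off(m0, m1):.0f} %")   # -> 23 %
```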
Abstract:
Aiming at simple and accurate readings of citric acid (CA) in complex samples, citrate (CIT) selective electrodes with a tubular configuration and polymeric membranes containing a quaternary ammonium ion exchanger were constructed. Several selective membranes were prepared for this purpose, with distinct mediator solvents (of quite different polarities) and, in some cases, p-tert-octylphenol (TOP) as an additive; the latter was used with a view to a possible increase in selectivity. The general working characteristics of all prepared electrodes were evaluated in a low-dispersion flow injection analysis (FIA) manifold by injecting 500 µl of citrate standard solutions into an ionic strength (IS) adjuster carrier (10^-2 mol l^-1) flowing at 3 ml min^-1. Good potentiometric response, with an average slope of 61.9 mV per decade and a repeatability of ±0.8%, resulted from selective membranes comprising the additive and bis(2-ethylhexyl)sebacate (bEHS) as mediator solvent. The same membranes also led to the best selectivity characteristics, assessed by the separate solution method for several chemical species, such as chloride, nitrate, ascorbate, glucose, fructose and sucrose. Pharmaceutical preparations, soft drinks and beers were analyzed under conditions that enabled simultaneous pH and ionic strength adjustment (pH = 3.2; ionic strength = 10^-2 mol l^-1), and the attained results agreed well with those of the reference method (relative error < 4%). These experimental conditions promoted a significant increase in the sensitivity of the potentiometric response, with a supra-Nernstian slope of 80.2 mV per decade, and allowed the analysis of about 90 samples per hour, with a relative standard deviation < 1.0%.
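To make the reported slopes concrete: a potentiometric electrode calibrated with a slope S (mV per decade) converts a measured potential back to concentration roughly as sketched below. Only the 61.9 mV per decade slope comes from the abstract; E0, the sample potential and the sign convention (assumed anionic response) are illustrative assumptions.

```python
# Sketch of using a calibration slope (mV per decade) to turn a measured
# electrode potential into a concentration. E0 and the sample potential are
# invented; the sign convention assumes an anion-selective membrane.
S = 61.9        # mV per decade (average slope reported in the abstract)
E0 = 250.0      # mV, assumed standard potential from calibration

def concentration_from_potential(E_mV):
    """Invert E = E0 - S*log10(c) for the citrate concentration c (mol/l)."""
    return 10 ** ((E0 - E_mV) / S)

E_sample = 395.0   # mV, assumed reading for a soft-drink injection
print(f"estimated citrate concentration: "
      f"{concentration_from_potential(E_sample):.2e} mol/l")
```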
Abstract:
This paper reports on the analysis of tidal breathing patterns measured during noninvasive forced oscillation lung function tests in six individual groups. The three adult groups were healthy, with prediagnosed chronic obstructive pulmonary disease, and with prediagnosed kyphoscoliosis, respectively. The three children groups were healthy, with prediagnosed asthma, and with prediagnosed cystic fibrosis, respectively. The analysis is applied to the pressure–volume curves and the pseudophase-plane loop by means of the box-counting method, which gives a measure of the area within each loop. The objective was to verify whether there exists a link between the area of the loops, power-law patterns, and alterations in the respiratory structure with disease. We obtained statistically significant variations between the data sets corresponding to the six groups of patients, showing also the existence of power-law patterns. Our findings support the idea that the respiratory system changes with disease in terms of airway geometry and tissue parameters, leading, in turn, to variations in the fractal dimension of the respiratory tree and its dynamics.
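A generic sketch of the box-counting idea applied to a closed loop sampled as (x, y) points: at a fixed grid size, the number of occupied boxes times the box area gives an area-like measure of the loop, and the log-log slope across grid sizes estimates the box-counting dimension. This is the textbook procedure, not the authors' exact implementation; the test loop below is synthetic.

```python
import numpy as np

def box_count(x, y, n_boxes):
    """Number of occupied cells in an n_boxes x n_boxes grid covering the curve."""
    xr = np.ptp(x) or 1.0
    yr = np.ptp(y) or 1.0
    ix = np.minimum(((x - x.min()) / xr * n_boxes).astype(int), n_boxes - 1)
    iy = np.minimum(((y - y.min()) / yr * n_boxes).astype(int), n_boxes - 1)
    return len(set(zip(ix, iy)))

def box_counting_dimension(x, y, sizes=(4, 8, 16, 32, 64, 128)):
    """Slope of log(count) vs log(grid size), i.e. the box-counting dimension."""
    counts = [box_count(x, y, n) for n in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope

# Synthetic closed loop standing in for a pressure-volume breathing loop
t = np.linspace(0, 2 * np.pi, 2000)
x, y = np.cos(t), 0.5 * np.sin(t) + 0.1 * np.sin(3 * t)
print(f"box-counting dimension ~ {box_counting_dimension(x, y):.2f}")  # near 1 for a smooth curve
```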
Abstract:
Constrained nonlinear optimization problems are usually solved using penalty or barrier methods combined with unconstrained optimization methods. Another alternative for solving constrained nonlinear optimization problems is the filter method. Filter methods, introduced by Fletcher and Leyffer in 2002, have been widely used in several areas of constrained nonlinear optimization. These methods treat the optimization problem as bi-objective, attempting to minimize the objective function and a continuous function that aggregates the constraint violation functions. Audet and Dennis presented the first filter method for derivative-free nonlinear programming, based on pattern search methods. Motivated by this work, we developed a new direct search method, based on simplex methods, for general constrained optimization, which combines the features of the simplex method and the filter method. This work presents a new variant of these methods that combines the filter method with other direct search methods, and some alternatives for aggregating the constraint violation functions are proposed.
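A minimal sketch of the filter idea itself (not the authors' algorithm): each trial point is summarized by its aggregated constraint violation h and objective value f, it is accepted only if no stored pair dominates it, and entries it dominates are then purged. Practical filter methods add sufficient-decrease envelopes, which are omitted here.

```python
# Minimal sketch of a filter for constrained optimization: each entry is a
# pair (h, f) of aggregated constraint violation and objective value.

def dominates(a, b):
    """Entry a = (h_a, f_a) dominates b if it is no worse in both measures."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def acceptable(trial, filter_entries):
    """A trial point is acceptable if no filter entry dominates it."""
    return not any(dominates(entry, trial) for entry in filter_entries)

def add_to_filter(trial, filter_entries):
    """Insert an accepted trial and drop any entries it dominates."""
    kept = [e for e in filter_entries if not dominates(trial, e)]
    kept.append(trial)
    return kept

# Tiny usage example with invented (h, f) pairs
filt = [(0.0, 5.0), (0.3, 3.0), (1.0, 1.0)]
for trial in [(0.2, 4.0), (0.1, 2.5), (2.0, 0.5)]:
    if acceptable(trial, filt):
        filt = add_to_filter(trial, filt)
print(sorted(filt))
```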
Abstract:
Buildings account for 40% of total energy consumption in the European Union. The reduction of energy consumption in the buildings sector constitutes an important measure needed to reduce the Union's energy dependency and greenhouse gas emissions. The Portuguese legislation incorporates these principles in order to regulate the energy performance of buildings. This energy performance should be accompanied by good conditions for the occupants of the buildings. According to EN 15251 (2007), the four factors that affect occupant comfort in buildings are: Indoor Air Quality (IAQ), thermal comfort, acoustics and lighting. Ventilation directly affects all of these except lighting, so it is crucial to understand its performance. The concept of ventilation efficiency therefore gains significance, because it attempts to quantify a parameter that can easily distinguish between the different options for air diffusion in a space. The two most internationally accepted indicators are the Air Change Efficiency (ACE) and the Contaminant Removal Effectiveness (CRE). Nowadays, with the development of Computational Fluid Dynamics (CFD), the behaviour of ventilation can be more easily predicted. Thirteen air diffusion strategies were measured in a test chamber through the application of the tracer gas method, with the objective of validating the calculation of these two indicators by the MicroFlo module of the IES-VE software. The main conclusions of this work were: the values of the numerical simulations are in agreement with the experimental measurements; the value of the CRE depends more on the position of the contamination source than on the strategy used for air diffusion; the ACE indicator is more appropriate for quantifying the quality of the air diffusion; and, to maximize ventilation efficiency, the solutions to be adopted should be the schemes that operate with low supply air speeds and small differences between the supply air temperature and the room temperature.
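For reference, the two indicators can be computed from tracer-gas data roughly as sketched below, using their standard definitions: ACE compares the nominal time constant with the room mean age of air, and CRE compares the exhaust concentration with the room mean concentration. The measurement values are invented and are not taken from the study.

```python
# Standard definitions of the two ventilation-efficiency indicators,
# applied to invented tracer-gas measurements.

def air_change_efficiency(nominal_time_constant, room_mean_age_of_air):
    """ACE (%) = tau_n / (2 * <tau>) * 100; 100% for piston flow, 50% for fully mixed air."""
    return 100.0 * nominal_time_constant / (2.0 * room_mean_age_of_air)

def contaminant_removal_effectiveness(c_exhaust, c_room_mean, c_supply=0.0):
    """CRE = (c_exhaust - c_supply) / (c_room_mean - c_supply); 1 for fully mixed air."""
    return (c_exhaust - c_supply) / (c_room_mean - c_supply)

# Invented values for one air-diffusion strategy in a test chamber
tau_n = 30.0        # min, nominal time constant (room volume / airflow rate)
mean_age = 22.0     # min, room mean age of air from tracer-gas decay
c_exhaust, c_mean = 480.0, 520.0   # ppm of tracer gas

print(f"ACE = {air_change_efficiency(tau_n, mean_age):.0f} %")
print(f"CRE = {contaminant_removal_effectiveness(c_exhaust, c_mean):.2f}")
```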