955 results for Ferritin-Reference value
Abstract:
Acute pancreatitis (AP) is a potentially life-threatening disease that originates from inflammatory involvement of the pancreas and surrounding tissues. Serious complications may ensue and treatment is difficult. AP is classified as either interstitial edematous pancreatitis, which occurs in 70-80% of patients, or necrotizing pancreatitis, which occurs in 20-30% of patients. Diagnosis is based on the presence of two of the following three criteria: abdominal pain, serum amylase and/or lipase more than three times the upper reference value, and characteristic tomographic findings. The latter include damage to the pancreas and surrounding tissues as well as findings related to distant organ involvement. This case report describes the fatal case of a male patient with a history of heavy alcohol abuse admitted with a diagnosis of necrotizing pancreatitis. The authors call attention to the unusual tomographic findings, namely a huge duodenal hematoma and a large hemoperitoneum, ischemic involvement of the spleen and kidneys, and pancreatic and peripancreatic necrosis.
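The diagnostic criteria quoted above amount to a simple two-of-three rule. The short sketch below only illustrates that rule; the function name, parameter names and example numbers are assumptions for illustration and are not taken from the case report.

```python
# Illustrative sketch of the diagnostic rule quoted in the abstract:
# acute pancreatitis is suggested when at least two of three criteria are met.
# Names and the example values are assumptions; only the "more than three
# times the upper reference value" rule comes from the abstract.

def meets_ap_criteria(abdominal_pain: bool,
                      amylase: float, lipase: float,
                      amylase_upper_ref: float, lipase_upper_ref: float,
                      characteristic_ct_findings: bool) -> bool:
    """Return True if at least two of the three diagnostic criteria are met."""
    enzymes_elevated = (amylase > 3 * amylase_upper_ref or
                        lipase > 3 * lipase_upper_ref)
    criteria = [abdominal_pain, enzymes_elevated, characteristic_ct_findings]
    return sum(criteria) >= 2

# Example: abdominal pain plus lipase at four times the upper reference value
print(meets_ap_criteria(True, 90.0, 240.0, 100.0, 60.0, False))  # True
```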
Abstract:
AIMS: To determine whether the current practice of sweat testing in Swiss hospitals is consistent with current international guidelines. METHODS: A questionnaire was mailed to all children's hospitals (n = 8), regional paediatric sections of general hospitals (n = 28), and all adult pulmonology centres (n = 8) in Switzerland which care for patients with cystic fibrosis (CF). The results were compared with the published 2000 guidelines of the American National Committee for Clinical Laboratory Standards (NCCLS) and the UK guidelines of 2003. RESULTS: The response rate was 89%. All 8 children's hospitals and 18 of the 23 responding paediatric sections performed sweat tests, but none of the adult pulmonology centres did. In total, 1560 sweat tests (range: 5-200 tests/centre/year, median 40) were performed per year. 88% (23/26) of the centres used Wescor systems, 73% (19/26) the Macroduct system for collecting sweat and 31% (8/26) the Nanoduct system. Sweat chloride was determined by only 62% (16/26) of all centres; of these, only 63% (10/16) indicated that they used the recommended diagnostic chloride CF reference value of >60 mmol/l. Osmolality was measured in 35%, sodium in 42% and conductivity in 62% of the hospitals. Sweat was collected for a maximum of 30-120 (median 55) minutes; only three centres used the maximum 30-minute sampling time recommended by the international guidelines. CONCLUSIONS: Sweat testing practice in Swiss hospitals was inconsistent and seldom followed the current international guidelines for sweat collection, analysis methods and reference values. Only 62% used the chloride concentration as a diagnostic reference, the only diagnostic measurement accepted by the NCCLS or UK guidelines.
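The only chloride criterion quoted in this abstract is the >60 mmol/l diagnostic reference value. The minimal sketch below applies just that threshold; the constant and function names are introduced here for illustration, and the guideline texts themselves define further categories that are not reproduced.

```python
# Minimal sketch of the diagnostic reference value cited in the abstract:
# sweat chloride above 60 mmol/l is treated as consistent with CF.
# The names and the handling of lower values are assumptions for illustration,
# not taken from the NCCLS or UK guideline texts.

CHLORIDE_CF_REFERENCE_MMOL_L = 60.0

def chloride_above_cf_reference(chloride_mmol_l: float) -> bool:
    """Return True if the sweat chloride exceeds the >60 mmol/l reference value."""
    return chloride_mmol_l > CHLORIDE_CF_REFERENCE_MMOL_L

print(chloride_above_cf_reference(74.0))  # True
print(chloride_above_cf_reference(32.0))  # False
```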
Abstract:
Wound healing disturbance is a common complication following surgery, but the underlying cause sometimes remains elusive. A 50-year-old Caucasian male developed an initially misunderstood severe wound healing disturbance following colon and abdominal wall surgery. An untreated alpha-1-antitrypsin (AAT) deficiency in the patient's medical history, known for 20 years and clinically apparent as mild to moderate chronic obstructive pulmonary disease, was eventually found to be at its origin. Further clinical work-up showed AAT serum levels below 30% of the lower reference value; phenotype testing showed a ZZ phenotype, and a biopsy taken from the wound area showed the characteristic, disease-related histological pattern of necrotising panniculitis. Augmentation therapy with plasma AAT was initiated and, within a few weeks, rapid and adequate wound healing was observed. AAT deficiency is an uncommon but clinically significant possible cause of wound healing disturbances. Augmentation therapy ought to be considered in affected patients during the perioperative period.
Abstract:
In the Persian Gulf and the Gulf of Oman, marl forms the primary sediment cover, particularly on the Iranian side. A detailed quantitative description of the sediment components > 63 µm has been attempted in order to establish the regional distribution of the most important constituents as well as the criteria governing marl sedimentation in general. During the course of the analysis, the sand fraction from about 160 bottom-surface samples was split into 5 phi fractions and 500 to 800 grains were counted in each individual fraction. The grains were catalogued in up to 40 grain-type categories. The gravel fraction was counted separately and the values calculated as weight percent. Basic to understanding the mode of formation of the marl sediment is the "rule" of independent availability of component groups. It states that the sedimentation of different component groups takes place independently, and that variation in the quantity of one component is independent of the presence or absence of other components. This means, for example, that different grain size spectra are not necessarily developed through transport sorting. In the Persian Gulf they are more likely the result of differences in the amount of clay-rich fine sediment brought into the restricted mouth areas of the Iranian rivers. These local increases in clayey sediment dilute the autochthonous, for the most part carbonate, coarse fraction. This also explains the frequent facies changes from carbonate to clayey marl. The main constituent groups of the coarse fraction are faecal pellets and lumps, the non-carbonate mineral components, the Pleistocene relict sediment, the benthonic biogenic components and the plankton. Faecal pellets and lumps are formed through grain size transformation of fine sediment. Higher percentages of these components can be correlated with large amounts of fine sediment and organic C. No discernible change takes place in carbonate minerals as a result of digestion and faecal pellet formation. The non-carbonate sand components originate from several unrelated sources and can be distinguished by their different grain size spectra as well as by other characteristics. The Iranian rivers supply the greatest amounts (well sorted fine sand). Their quantitative variations can be used to trace fine sediment transport directions. Similar mineral maxima in the sediment of the Gulf of Oman mark the path of the Persian Gulf outflow water. Far out from the coast, the basin bottoms in places contain abundant relict minerals (poorly sorted medium sand) and localized areas of reworked salt dome material (medium sand to gravel). Wind transport produces only a minimal "background value" of mineral components (very fine sand). Biogenic and non-biogenic relict sediments can be placed in separate component groups with the help of several petrographic criteria. Part of the relict sediment (well sorted fine sand) is allochthonous and was derived from the terrigenous sediment of river mouths. The main part (coarse, poorly sorted sediment), however, was derived from the late Pleistocene and forms a quasi-autochthonous cover over wide areas which receive little recent sedimentation. Bioturbation results in a mixing of the relict sediment with the overlying younger sediment. Resulting vertical sediment displacements of more than 2.5 m have been observed. This vertical mixing of relict sediment is also partially responsible for the present-day grain size anomalies (coarse sediment in deep water) found in the Persian Gulf.
The mainly aragonitic components forming the relict sediment show a finely subdivided facies pattern reflecting the paleogeography of carbonate tidal flats dating from the post-Pleistocene transgression. Standstill periods are reflected at 110-125 m (shelf break), 64-61 m and 53-41 m (e.g. coarse-grained quartz and oolite concentrations), and at 25-30 m. Comparing these depths to similar occurrences on other shelf regions (e.g. Timor Sea) leads to the conclusion that at this time minimal tectonic activity was taking place in the Persian Gulf. The Pleistocene climate, as evidenced by the absence of Iranian river sediment, was probably drier than the present-day Persian Gulf climate. Foremost among the benthonic biogenic components are the foraminifera and mollusks. When a ratio is set up between the two, it can be seen that each group is very sensitive to bottom type, i.e., the production of benthonic mollusks increases when a stable (hard) bottom is present whereas the foraminifera favour a soft bottom. In this way, regardless of the grain size, areas with high and low rates of recent sedimentation can be sharply defined. The almost complete absence of mollusks in water deeper than 200 to 300 m gives a rough sedimentologic water depth indicator. The sum of the benthonic foraminifera and mollusks was used as a relatively constant reference value for the investigation of many other sediment components. The ratio between arenaceous foraminifera and those with carbonate shells shows a direct relationship to the amount of coarse-grained material in the sediment, as the frequency of arenaceous foraminifera depends heavily on the availability of sand grains. The nearness of "open" coasts (Iranian river mouths) is directly reflected in the high percentage of plant remains, and indirectly by the increased numbers of ostracods and vertebrates. Plant fragments do not reach their ultimate point of deposition in a free-swimming state, but are transported along with the remainder of the terrigenous fine sediment. The echinoderms (mainly echinoids in the West Basin and ophiuroids in the Central Basin) attain their maximum development at the greatest depth reached by the action of the largest waves. This depth varies, depending on the exposure of the slope to the waves, between 12 to 14 and 30 to 35 m. Corals and bryozoans have proved to be good indicators of stable, unchanging bottom conditions. Although bryozoans and alcyonarian spiculae are independent of water depth, scleractinians thrive only above 25 to 30 m. The beginning of recent reef growth (restricted by low winter temperatures) was seen in only a single area - on a shoal under 16 m of water. The coarse plankton fraction was studied primarily through the use of a plankton-benthos ratio. The increase in planktonic foraminifera with increasing water depth is here heavily masked by the "adjacent sea effect" of the Persian Gulf: for the most part the foraminifera have drifted in from the Gulf of Oman. In contrast, the planktonic mollusks are able to colonize the entire Persian Gulf water body. Their amount in the plankton-benthos ratio always increases with water depth and thereby gives a reliable picture of local water depth variations. This holds true to a depth of around 400 m (corresponding to 80-90% plankton). This water depth effect can be removed by graphical analysis, allowing the percentage of planktonic mollusks per total sample to be used as a reference base for the relative sedimentation rate (sedimentation index).
These values vary between 1 and > 1000 and thereby agree well with all the other lines of evidence. The "pteropod ooze" facies is thus markedly dependent on the sedimentation rate and can theoretically develop at any depth greater than 65 m (proven at 80 m). It should certainly no longer be thought of as a "deep sea" sediment. Based on the component distribution diagrams, grain size and carbonate content, the sediments of the Persian Gulf and the Gulf of Oman can be grouped into 5 provisional facies divisions (Chapter 19). Particularly noteworthy among these are, first, the fine-grained clayey marl facies occupying the 9 narrow outflow areas of rivers and, second, the coarse-grained, high-carbonate marl facies rich in relict sediment which covers wide sediment-poor areas of the basin bottoms. Sediment transport is for the most part restricted to grain sizes < 150 µm and in shallow water is largely coast-parallel due to wave action, at times supplemented by tidal currents. Below the wave base, gravity transport prevails. The only current capable of moving sediment is the Persian Gulf outflow water in the Gulf of Oman.
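The plankton-benthos ratio used above is, in effect, a plankton percentage. A compact way to write the quantity described in the abstract is sketched below; the symbols P (planktonic grains) and B (benthonic foraminifera plus mollusks) are introduced here for illustration and are not the author's own notation.

```latex
% Plankton-benthos ratio expressed as a plankton percentage.
% P = planktonic grains counted, B = benthonic foraminifera + mollusks;
% the symbols are illustrative, not the author's notation.
\[
  \text{plankton fraction} \;=\; \frac{P}{P+B}\times 100\,\% ,
\]
% with values of roughly 80--90\,\% reached near 400 m water depth,
% according to the abstract.
```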
Abstract:
The present study analyzes residential models in coastal areas with large influxes of tourism, the sustainability of their planning and its repercussion on urban values. The project seeks to establish a methodology for territorial valuation through the analysis of externalities that have influenced urban growth and its impact on the formation of residential real estate values. This makes it possible to create a map for qualitative land valuation, resulting from a combination of environmental, landscape, social and productive valuations. This in turn establishes a reference value for each of the areas in question, as well as their spatial interrelations. These values become guidelines for the study of different territorial scenarios, which help improve the sustainable territorial planning process. The outcome is, in effect, a rating scale for urban planning. The results allow us to establish how the specific characteristics of the coast are valued and how they can be incorporated into sustainable development policies.
Abstract:
Gasification is a technology that can replace the traditional management alternatives used to date to deal with sewage sludge (landfilling, composting and incineration) and which fulfils social, environmental and legislative requirements. The main products of sewage sludge gasification are permanent gases (useful for generating energy or as raw material in chemical synthesis processes), liquids (tars) and char. One of the main problems to be solved in gasification is tar production. Tars are organic impurities which can condense at relatively high temperatures, making it impossible to use the produced gases for most applications. This work deals with the effect of some primary tar removal processes (performed inside the gasifier) on sewage sludge gasification products. For this purpose, analyses of the gas composition, tar production, cold gas efficiency and carbon conversion were carried out. The tests were performed with air in a laboratory-scale plant consisting mainly of a bubbling bed gasifier. Non-catalysed and catalysed (10 wt% of dolomite in the bed and in the feeding) tests were carried out at different temperatures (750 °C, 800 °C and 850 °C) in order to determine the effect of these parameters on the gasification products. As far as tars were concerned, qualitative and quantitative tar composition was determined. In all tests the equivalence ratio (ER) was kept at 0.3. Temperature is one of the most influential variables in sewage sludge gasification. Higher temperatures favoured hydrogen and CO production while the CO2 content decreased, which might be partially explained by the effect of the cracking, Boudouard and CO2 reforming reactions. At 850 °C, cold gas efficiency and carbon conversion reached 49% and 76%, respectively. The presence of dolomite as catalyst increased the production of H2, reaching contents of 15.5% by volume at 850 °C. Similar behaviour was found for CO, whereas CO2 and CnHm (light hydrocarbons) production decreased. In the presence of dolomite, a tar reduction of up to 51% was reached in comparison with non-catalysed tests, as well as improvements in cold gas efficiency and carbon conversion. Several assays were carried out in order to test catalyst performance under harsher gasification conditions. For this purpose, the throughput value (TR), defined as kg of sludge "as received" fed to the gasifier per hour and per m2 of cross-sectional area of the gasifier, was modified. Specifically, the TR values used were 110 (reference value), 215 and 322 kg/h·m2. When TR increased, H2, CO and CH4 production decreased while CO2 and CnHm production increased. Tar production increased drastically with TR during non-catalysed tests, which is related to the lower residence time of the gas inside the reactor. Nevertheless, even at TR = 322 kg/h·m2, tar production decreased by nearly 50% with in-bed use of dolomite in comparison with non-catalysed assays under the same operating conditions. Regarding relative tar composition, there was an increase in benzene and naphthalene content when the temperature increased, while the content of the rest of the compounds decreased. Dolomite seemed to be effective over the whole range of molecular weights studied, showing tar removal efficiencies between 35% and 55% in most cases. High values of TR caused a significant increase in tar production but had only a slight effect on tar composition.
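The throughput value is defined explicitly in the abstract; written as a formula it reads as below. The accompanying expressions for the equivalence ratio and cold gas efficiency are given in their commonly used forms for context only, and are assumptions rather than the thesis' own definitions.

```latex
% TR as defined in the abstract; ER and cold gas efficiency are shown in
% commonly used forms and are assumptions, not quotations from the thesis.
\[
  TR \;=\; \frac{\dot m_{\mathrm{sludge,\ as\ received}}\ [\mathrm{kg/h}]}
                {A_{\mathrm{cross\ section}}\ [\mathrm{m^2}]},
\qquad
  ER \;=\; \frac{\text{air supplied}}{\text{stoichiometric air}} \;=\; 0.3,
\qquad
  \eta_{\mathrm{cold\ gas}} \;=\;
  \frac{\dot V_{\mathrm{gas}}\,LHV_{\mathrm{gas}}}
       {\dot m_{\mathrm{sludge}}\,LHV_{\mathrm{sludge}}}.
\]
```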
Abstract:
Linear Fresnel collector arrays present some relevant advantages in the domain of concentrating solar power because of their simplicity, robustness and low capital cost. However, they also present important drawbacks and limitations, notably their average concentration ratio, which seems to limit significantly the performance of these systems. First, the paper addresses the problem of characterizing the mirror field configuration assuming hourly data of a typical year, with reference to a configuration similar to that of Fresdemo. For a proper comparative study, it is necessary to define a comparison criterion. In that sense, a new variable is defined, the useful energy efficiency, which only accounts for the radiation that impinges on the receiver with intensities above a reference value. As a second step, a comparative study between central linear Fresnel reflectors and compact linear Fresnel reflectors is carried out. This analysis shows that compact linear Fresnel reflectors minimize blocking and shading losses compared to a central configuration. However, this minimization is not enough to overcome other negative effects of the compact Fresnel collectors, such as the greater dispersion of the rays reaching the receiver, caused by the fact that the mirrors must be located farther from the receiver, which leads to lower efficiencies.
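A minimal sketch of how a "useful energy efficiency" figure could be evaluated from hourly data is given below, assuming the quantity is the receiver-incident energy above a threshold intensity divided by the available direct normal irradiation. The variable names and this exact normalisation are assumptions, since the paper's own formula is not reproduced in the abstract.

```python
# Hedged sketch: useful energy efficiency from hourly data of a typical year.
# Assumption: only hours in which the energy reaching the receiver exceeds a
# reference value are counted as useful; the normalisation by the available
# direct normal irradiation and all names are illustrative, not the paper's.

def useful_energy_efficiency(receiver_flux_wh, dni_wh, reference_value_wh):
    """receiver_flux_wh, dni_wh: hourly energy series (Wh); threshold in Wh."""
    useful = sum(q for q in receiver_flux_wh if q > reference_value_wh)
    available = sum(dni_wh)
    return useful / available if available > 0 else 0.0

# Toy example with three hours of data
print(useful_energy_efficiency([120.0, 40.0, 300.0], [800.0, 600.0, 900.0], 100.0))
```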
Abstract:
In this research, the effect of temperature variation on the deflection of flexible pavements is analysed. First, the existing criteria for adjusting deflections for temperature were compiled. An empirical study was then carried out, consisting of deflection surveys on five flexible-pavement road sections with different asphalt mix thicknesses (from 10 to 30 cm). The measurements were taken in two campaigns (summer and winter) in an effort to cover a wide range of temperatures. In each campaign, several surveys were carried out at different temperatures, and all the measurements of a campaign were made on the same day. The empirical temperature adjustment factors for every analysed section were obtained. In addition, a theoretical study was carried out by developing different models (linear elastic multilayer, linear visco-elastic multilayer and finite elements) that reproduce the structural response of the surveyed flexible pavements. The mechanical characterization of the asphalt mixes was achieved through laboratory complex-modulus tests at different temperatures and frequencies, using cores extracted from the surveyed roads. The theoretical temperature adjustment factors for each model developed and each section analysed were calculated. Finally, a comparative study among the different adjustment factors (existing, empirical and theoretical) was carried out.
It showed that, in all the analysed cases, the factors obtained with the finite element model are the closest to the empirical factors (the reference value for the analysed sections). The finite element model developed makes it possible to reproduce the visco-elastic behavior of the asphalt mixes and the dynamic nature of the applied loads. Linear isoparametric tetrahedral elements (C3D8R) were used for the pavement and the upper part of the foundation, while infinite elements (CIN3D8) were used for the lower part.
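The abstract does not reproduce the functional form of the empirical adjustment factors, so the sketch below only illustrates one common way such a factor could be derived from paired temperature-deflection surveys: fit deflection against temperature and refer measured deflections to a reference temperature. The linear fit, the 20 °C reference and all names are assumptions for illustration, not the thesis' formulation.

```python
# Hedged sketch: deriving a temperature adjustment factor for deflections from
# paired (temperature, deflection) measurements on one section.  The linear
# fit, the 20 degC reference temperature and all names are assumptions.
from statistics import mean

def temperature_adjustment(temps_c, deflections, t_ref_c=20.0):
    """Fit deflection = a + b*T by least squares; return a function that refers
    a deflection measured at temperature T to the reference temperature."""
    t_bar, d_bar = mean(temps_c), mean(deflections)
    b = sum((t - t_bar) * (d - d_bar) for t, d in zip(temps_c, deflections)) \
        / sum((t - t_bar) ** 2 for t in temps_c)
    a = d_bar - b * t_bar

    def adjust(deflection, t_c):
        return deflection * (a + b * t_ref_c) / (a + b * t_c)

    return adjust

# Toy example: deflections (in 0.01 mm) measured at several temperatures
adjust = temperature_adjustment([10, 20, 30, 40], [38, 42, 47, 51])
print(round(adjust(47, 30), 1))  # deflection at 30 degC referred to 20 degC
```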
Abstract:
Ultrasound wave velocity was measured in 30 pieces of Spanish Scots pine (Pinus sylvestris L.), 90 x 140 mm in cross-section and 4 m long. Five different sensor placement arrangements were used: end to end (V0), face to opposite face, edge to opposite edge, face to same face and edge to same edge. The pieces were successively shortened to 3, 2 and 1 m in order to obtain these velocities and their ratios to the reference value V0 for different lengths and, for the crossed measurements, different angles with respect to the piece axis. The velocity obtained in crossed measurements is lower than V0. A correction coefficient for crossed velocities, depending on the angle, is proposed to adjust them to the V0 benchmark. The velocities measured on a single surface are also lower than V0, and their ratio with respect to V0 is close to 0.97 for distances equal to or greater than 18 times the depth of the beam.
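The geometry behind a crossed measurement can be made explicit with a short sketch: the straight sensor-to-sensor path, the propagation angle with respect to the piece axis, and the resulting velocity ratio to the end-to-end reference V0. The paper's actual angle-dependent correction coefficient is not reproduced here; the names and example numbers are illustrative assumptions.

```python
# Hedged sketch of a crossed (face-to-opposite-face) ultrasound measurement:
# path length, propagation angle with respect to the piece axis, and the
# velocity ratio to the end-to-end reference V0.  The example transit time and
# V0 are assumptions; the paper's correction coefficient is not reproduced.
import math

def crossed_measurement(length_m, depth_m, transit_time_s, v0_m_s):
    path = math.hypot(length_m, depth_m)            # straight sensor-to-sensor path
    angle_deg = math.degrees(math.atan2(depth_m, length_m))
    velocity = path / transit_time_s
    return velocity, angle_deg, velocity / v0_m_s   # ratio to the V0 benchmark

# Toy example: 2 m long piece, 140 mm deep, assumed transit time and V0
v, ang, ratio = crossed_measurement(2.0, 0.140, 3.8e-4, 5500.0)
print(f"v = {v:.0f} m/s, angle = {ang:.1f} deg, v/V0 = {ratio:.2f}")
```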
Abstract:
The crane names currently in use are given by the manufacturers and very frequently bear no relation to the crane type or its capacity. On the other hand, it is common in construction to refer to cranes by their nominal capacity, which normally corresponds to the maximum capacity obtained at minimum radius. The use of this figure is controversial, since it does not usually define the real capacity of these machines well. As soon as the working radius moves away from its minimum values, cranes are limited by the tipping moment, which does not necessarily behave proportionally to, or is even related to, the nominal capacity. Therefore, comparing cranes by their nominal capacities (their names) can lead to important mistakes. As an alternative, the use of the maximum load moment (MLM) is studied in order to better define the real capacity of cranes. A technical and financial analysis of cranes is conducted with respect to both parameters in order to determine which of the two is more reliable for defining the real capacity of these machines. For this purpose, nine cranes of different sizes and capacities are selected from the three crane types most relevant to construction (crawler lattice-boom cranes, telescopic truck-mounted cranes and tower cranes) in order to analyse a series of technical parameters and their costs. Several comparisons are made and the results are analysed according to the types and sizes of the different machines. For each machine, the capacities and tipping moments corresponding to several working radii are obtained, as well as the MLM and the hourly cost of each crane, the latter calculated as the sum of depreciation, interest on the invested capital, consumables, maintenance and the operator's cost. The results show clear limitations of the nominal capacity as a reference value for defining crane capacity, since cranes with the same nominal capacity can show capacity ratios of three to one (or even larger) at large working radii. Based on this analysis, the use of the MLM instead of the nominal capacity is proposed for naming cranes, since it is a much more reliable parameter. Being aware of the difficulty of such a change, given that the nominal capacity is commonly used worldwide, specific actions are suggested to make progress in that direction: for example, manufacturers could include the MLM (and possibly the crane type) in their official crane names, or a simple legislative measure could require manufacturers to state this figure in the load charts and characteristics of every machine. The analysed ratio, hourly cost of the crane / MLM, proves to be of great interest and leads to the conclusion that, for all crane types, the efficiency of the hourly cost per unit of capacity (given by the MLM) increases as crane capacity increases. When the sizes within each type decrease, this efficiency decreases, in some cases even drastically.
The developments in the construction world regarding prefabrication and modularization mean ever larger weights and dimensions of loads, which creates a demand for cranes of ever greater capacity. At first sight, it could be thought that, faced with such a significant growth in capacities, crane costs could rise sharply and the efficiency of these machines could therefore decrease. The results obtained in this analysis show that not only does this problem not occur, but that the increase in crane sizes and capacities leads to higher efficiency for all of the crane types studied.
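The hourly-cost components (depreciation, interest on the invested capital, consumables, maintenance, operator) and the hourly cost / MLM ratio come from the abstract; the sketch below only illustrates how such a comparison could be computed. The straight-line depreciation, the annual usage hours and all numbers are assumptions, not figures from the study.

```python
# Hedged sketch of the hourly-cost and cost/MLM comparison described in the
# abstract.  The cost components come from the abstract; the straight-line
# depreciation, annual hours and example values are assumptions.

def hourly_cost(purchase_price, residual_value, service_years, hours_per_year,
                interest_rate, consumables_per_h, maintenance_per_h, operator_per_h):
    depreciation = (purchase_price - residual_value) / (service_years * hours_per_year)
    interest = interest_rate * purchase_price / hours_per_year  # simple annual interest
    return depreciation + interest + consumables_per_h + maintenance_per_h + operator_per_h

def cost_per_mlm(cost_per_hour, mlm_tm):
    """Ratio used in the abstract: hourly cost divided by maximum load moment (t*m)."""
    return cost_per_hour / mlm_tm

# Toy example for a hypothetical crane
c = hourly_cost(1_200_000, 200_000, 10, 1500, 0.04, 35.0, 20.0, 40.0)
print(round(c, 1), round(cost_per_mlm(c, 900), 3))
```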
Abstract:
In highly contaminated regions such as the Baixada Santista, it is important to establish targets for environmental recovery. Although the absence of contamination is the ideal target, the implications and costs associated with that objective demand the establishment of realistic recovery targets with respect to the contaminants present in the region. In order to characterize sediment quality reference values for organic compounds in the Baixada Santista region, the Bertioga Channel was chosen as the reference site because it is an area without industrial or other point sources of the analysed compounds. Samples of water, sediment and oysters were collected, and polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs) were determined by chromatographic techniques. The evaluation of the PAH results allows it to be stated with some confidence that the summed PAH values of the sediment samples are, for the most part, below 1,000 µg/kg and do not exceed 1,600 µg/kg, concentrations below the limits established in CONAMA Resolution 344/04 and below the values that could cause adverse effects on the biota, according to values described in the literature. Results of analyses of PCBs, OCPs, phenolic compounds and volatile organic compounds (VOCs) in sediment samples indicated concentrations of these compounds below the quantification limits, except for DDE (5.30 µg/kg) and HCB (2.34 µg/kg), which were detected at only one sampling site. There was no evidence of possible emission sources for PCBs, OCPs, phenolic compounds and VOCs near the reference region. Finally, it is hoped that the results obtained in this study can provide support for the future establishment of a reference area for sediment quality in the Baixada Santista region, or that they can be used together with assessments of inorganic contaminants, ecotoxicological tests and biological indicators as a tool for evaluating sediment quality and/or for classifying material to be dredged in the Baixada Santista region.
Abstract:
A range of physical and engineering systems exhibit irregular complex dynamics featuring an alternation of quiet and burst time intervals called intermittency. The type of intermittency most popular in laser science is on-off intermittency [1]. On-off intermittency can be understood as a conversion of the noise in a system close to an instability threshold into effective time-dependent fluctuations which result in the alternation of stable and unstable periods. On-off intermittency has recently been demonstrated in semiconductor, erbium-doped and Raman lasers [2-5]. The recently demonstrated random distributed feedback (random DFB) fiber laser exhibits irregular dynamics near the generation threshold [6,7]. Here we show intermittency in a cascaded random DFB fiber laser. We study intensity fluctuations in a random DFB fiber laser based on nitrogen-doped fiber. Under appropriate pumping, the laser generates first and second Stokes components at 1120 nm and 1180 nm, respectively. We study the intermittency in the radiation of the second Stokes wave. A typical time trace near the generation threshold of the second Stokes wave (Pth) is shown in Fig. 1a. From a number of sufficiently long time traces we calculate the statistical distribution of the intervals between major spikes in the time dynamics, Fig. 1b. To eliminate the contribution of high-frequency components of the spikes we use a low-pass filter together with a reference value of the output power. The experimental data are fitted by a power law.
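A hedged sketch of the kind of spike-interval analysis described above is given below: low-pass filter the intensity trace, detect spikes where the filtered power crosses a reference value, collect the intervals between spikes and fit their distribution with a power law. The filter choice, threshold and all parameters are assumptions for illustration, not the authors' actual processing chain.

```python
# Hedged sketch of the spike-interval statistics described in the abstract.
# Filter, threshold and parameters are illustrative assumptions.
import numpy as np

def moving_average(x, window):
    """Simple low-pass filter: moving average over `window` samples."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

def spike_intervals(trace, dt, reference_value, window=50):
    """Times between upward crossings of the reference value in the filtered trace."""
    filtered = moving_average(trace, window)
    above = filtered > reference_value
    crossings = np.flatnonzero(~above[:-1] & above[1:])
    return np.diff(crossings) * dt

def power_law_exponent(intervals, bins=30):
    """Estimate alpha in P(tau) ~ tau**(-alpha) from a log-log histogram fit."""
    counts, edges = np.histogram(intervals, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0
    slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
    return -slope

# Toy example with a synthetic noisy trace (not real laser data)
rng = np.random.default_rng(0)
trace = rng.exponential(1.0, 200_000)
taus = spike_intervals(trace, dt=1e-6, reference_value=1.2)
print(len(taus), round(power_law_exponent(taus), 2))
```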
Abstract:
The economic crisis that began in 2008 has made prices even more important to buyers than before. Everyone has long known that prices fundamentally influence consumers' purchasing decisions. However, we cannot always give a precise answer to the question of how. According to economics, a fall in prices increases consumers' willingness to buy and, conversely, a rise in prices reduces it. Reality, however, cannot always be described with economic concepts or mathematical formulas. Since the beginning of the global economic recession, prices have become more and more important for sellers and buyers. Studying the role of prices in consumer behaviour is a rather new field of marketing research. The paper starts out from the fact that prices can be regarded as a multidimensional stimulus which influences the purchasing decision of consumers. The study describes the process by which, in this multidimensional pricing environment, consumers get from the perception through the evaluation of prices to the purchasing decision. According to the model constructed by the author, the perception of prices depends on the presentation of prices and on the willingness and ability of people to numerically perceive and evaluate the different presentations of prices. In the process by which consumers get from the perceived prices through the expected prices to the purchasing decision, the perceived value plays the most important role. The perceived value is motivated by the internal and external reference prices and the perceived reference value. The paper comes to the conclusion that in recession and post-recession times, companies are compelled to understand these processes better in order to be able to set their price points according to changing buyer behaviour.
Abstract:
This work concerns a refinement of a suboptimal dual controller for discrete-time systems with stochastic parameters. The dual property means that the control signal is chosen so that estimation of the model parameters and regulation of the output signals are optimally balanced. The control signal is computed so as to minimize the variance of the output around a reference value one step ahead, with the addition of terms to the loss function. The idea is to add simple terms depending on the covariance matrix of the parameter estimates two steps ahead. An algorithm is used for the adaptive adjustment of the adjustable parameter lambda at each step. The actual performance of the proposed controller is evaluated through Monte Carlo simulations.
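A loss function of the kind described above can be written compactly. The expression below is an assumed illustration consistent with the abstract (one-step-ahead output variance about the reference value plus a lambda-weighted term in the parameter-estimate covariance two steps ahead); it is not the authors' own formula.

```latex
% Hedged illustration of a loss of the type described in the abstract:
% one-step-ahead output variance about the reference value y_r, plus a
% lambda-weighted term in the parameter-estimate covariance two steps ahead.
\[
  J_k \;=\; \mathrm{E}\!\left[(y_{k+1}-y_r)^2 \,\middle|\, \mathcal{I}_k\right]
  \;+\; \lambda_k \,\operatorname{tr}\!\left(P_{k+2\mid k}\right),
\]
% where P is the covariance matrix of the parameter estimates, I_k is the
% information available at step k, and lambda_k is adjusted adaptively at
% each step (symbols introduced here for illustration only).
```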
Abstract:
Purpose: It is important to establish a differential diagnosis between the different types of nystagmus in order to give the appropriate clinical approach to each situation and to improve visual acuity. Nystagmus is normally blocked when the eyes are positioned in a particular way. This makes the child adopt a posture of ocular torticollis that reduces the nystagmiform movements, improving vision in this position. One way to promote the blocking of the nystagmic movements is to use prismatic lenses with opposite bases, to block or minimize the oscillatory movements. This results in an improvement in vision and reduces the anomalous head position. There is limited research on the visual results in children with nystagmus after using prisms with opposing bases. Our aim is to describe the impact on visual acuity (VA) of the prescription of prism lenses in a patient with nystagmus starting at 3 months of age. Methods: Case report of a thirty-month-old Caucasian male infant, with normal growth and development for his age, with an early onset of horizontal nystagmus at 3 months of age. Ophthalmic examination included slit lamp examination, fundus examination, refractive study, electrophysiological and magnetic resonance tests, and measurement of VA over time with the Teller Acuity Cards (TAC) at the distance appropriate for the age. At age ten months, the mother noted a persistent turn of the child's head to the right, which became increasingly severe over the following months. There was no oscillopsia. At 24 months, atropine refraction showed the following refractive error: OD: -1.50, OS: -0.50, and prismatic lenses were adapted: OD 8 Δ nasal base and OS 8 Δ temporal base. Results: Thirty-month-old child, with adequate development for his age, with onset of idiopathic horizontal nystagmus at 3 months of age. Normal ocular fundus and magnetic resonance without alterations, sub-normal results in electrophysiological tests and VA values below normal for the age. At 6 months: OD 20/300; OS 20/400; OU 20/300. At 9 months: OD 20/250; OS 20/300; OU 20/150 (TAC at 38 cm). At 18 months: OD 20/200; OS 20/100; OU 20/80 (TAC at 38 cm); when the head is turned to the right and the eyes are in levoversion, the nystagmus decreases in a "neutral" area. At 24 months, with the prismatic glasses: OD 20/200, OS 20/100, OU 20/80 (TAC at 54 cm; reference value 20/30 - 20/100 for OU and 20/40 - 20/100 monocular); there was an increase in visual acuity. The child underwent visual stimulation with multimedia devices while wearing the glasses. After adaptation of the prisms, at 30 months, VA (with Cambridge cards) was OD and OS = 6/18. The child's VA improved and the anomalous head position was reduced. There was also improvement in mobility and fine motor skills. Conclusion: Prisms with opposing bases were used in the treatment of idiopathic nystagmus. The prisms were adapted to reduce the skewed position of the head and to improve VA and binocular function. Monitoring of visual acuity and visual stimulation were done using electronic devices. Following the use of the prismatic lenses, the patient's VA improved significantly and the anomalous head position was reduced.