974 results for Pseudo-Lipschitzness
Abstract:
The phytoplankton standing crop was assessed in detail along the South Eastern Arabian Sea (SEAS) during the different phases of coastal upwelling in 2009. During phase 1, intense upwelling was observed along the southern transects (8°N and 8.5°N). The maximum chlorophyll a concentration (22.7 mg m−3) was observed in the coastal waters off Thiruvananthapuram (8.5°N). Further north there was no signature of upwelling, but extensive Trichodesmium erythraeum blooms were present. Diatoms dominated in the upwelling regions, with the centric diatom Chaetoceros curvisetus being the dominant species along the 8°N transect. Along the 8.5°N transect, pennate diatoms like Nitzschia seriata and Pseudo-nitzschia sp. dominated. During phase 2, upwelling of varying intensity was observed throughout the study area, with maximum chlorophyll a concentrations along the 9°N transect (25 mg m−3) and Chaetoceros curvisetus as the dominant phytoplankton. Along the 8.5°N transect, the pennate diatoms of phase 1 were replaced by centric diatoms like Chaetoceros sp. The presence of the solitary pennate diatoms Amphora sp. and Navicula sp. was significant in the waters off Kochi. Upwelling waned during phase 3 and was confined to the coastal waters of the southern transects, with a highest chlorophyll a concentration of 11.2 mg m−3. Along with diatoms, dinoflagellate cell densities increased in phases 2 and 3. In the northern transects (9°N and 10°N) the proportion of dinoflagellates was comparatively higher and was represented mainly by Protoperidinium spp., Ceratium spp. and Dinophysis spp.
Abstract:
Considerable research effort has been devoted to predicting the exon regions of genes. The binary indicator (BI) method, the electron-ion interaction pseudopotential (EIIP) method and filter methods are among these approaches; all of them exploit the period-three behavior of exon regions. Although the method suggested in this paper is similar to the methods mentioned above, it introduces a set of mapping sequences for the nucleotides, selected by applying a genetic algorithm, and is found to be more promising.
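As background for how such period-three detectors work, here is a minimal sketch in Python of the EIIP-style pipeline: map nucleotides to numbers and measure the DFT power at one third of the window length. The EIIP values are the standard published ones, not the paper's GA-selected mapping, and the window length is an illustrative choice.

```python
import numpy as np

# Standard EIIP values for the four nucleotides; the paper's own mapping
# is selected by a genetic algorithm and is not reproduced here.
EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

def period3_scores(seq, window=351, step=3):
    """Slide a window along the sequence and return the DFT power at
    frequency N/3, the classic signature of protein-coding (exon) regions."""
    x = np.array([EIIP[b] for b in seq.upper()])
    scores = []
    for start in range(0, len(x) - window + 1, step):
        w = x[start:start + window]
        X = np.fft.fft(w - w.mean())             # remove the DC component
        scores.append(abs(X[window // 3]) ** 2)  # power at period three
    return np.array(scores)

# Peaks in period3_scores(genomic_sequence) mark candidate exon regions.
```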
Abstract:
Pollution of water with pesticides has become a threat to people, materials and the environment. Pesticides released to the environment reach water bodies through runoff. Industrial wastewater from pesticide manufacturing industries contains pesticides at higher concentrations and is hence a major source of water pollution. Pesticides create many health and environmental hazards, including diseases like cancer, liver and kidney disorders, reproductive disorders, fetal death and birth defects. Conventional wastewater treatment plants based on biological treatment are not efficient enough to remove these compounds to the desired level: most pesticides are recalcitrant in nature and toxic to the microorganisms responsible for their degradation. Advanced oxidation processes (AOPs) are a class of oxidation techniques in which hydroxyl radicals are employed for the oxidation of pollutants. AOPs have the ability to totally mineralise organic pollutants to CO2 and water, and different methods are employed for the generation of hydroxyl radicals in AOP systems. Acetamiprid is a neonicotinoid insecticide widely used to control sucking insects on crops such as leafy vegetables, citrus fruits, pome fruits, grapes, cotton and ornamental flowers. It is now recommended as a substitute for organophosphorus pesticides. Since its use is increasing, its presence is increasingly found in the environment. It has high water solubility, is not easily biodegradable and has the potential to pollute surface and ground waters. Here, the use of AOPs for the removal of acetamiprid from wastewater has been investigated. Five methods were selected for the study based on a literature survey and preliminary experiments: the Fenton process, UV treatment, the UV/H2O2 process, photo-Fenton and photocatalysis using TiO2. Undoped TiO2 and TiO2 doped with Cu and Fe were prepared by the sol-gel method. The prepared catalysts were characterised by X-ray diffraction, scanning electron microscopy, differential thermal analysis and thermogravimetric analysis. The influence of the major operating parameters on the removal of acetamiprid was investigated. All experiments were designed using the central composite design (CCD) of response surface methodology (RSM). Model equations were developed for Fenton, UV/H2O2, photo-Fenton and photocatalysis for predicting acetamiprid removal and total organic carbon (TOC) removal under different operating conditions. The quality of the models was analysed by statistical methods, experimental validations confirmed the quality of the models, and the optimum conditions obtained by experiment were verified against those obtained using the response optimiser. The Fenton process is the simplest and oldest AOP, in which hydrogen peroxide and iron are employed for the generation of hydroxyl radicals. The influence of H2O2 and Fe2+ on acetamiprid removal and TOC removal by the Fenton process was investigated, and removal was found to increase with increasing H2O2 and Fe2+ concentration. For an initial acetamiprid concentration of 50 mg/L, 200 mg/L H2O2 and 20 mg/L Fe2+ at pH 3 were found to be optimal for acetamiprid removal. For UV treatment the effect of pH was studied, and pH was found to have little effect on the removal rate. Addition of H2O2 to the UV process increased the removal rate because of the hydroxyl radicals formed by photolysis of H2O2; an H2O2 concentration of 110 mg/L at pH 6 was found to be optimal for acetamiprid removal.
With photo-Fenton, a drastic reduction in treatment time was observed, together with a tenfold reduction in the amount of reagents required: an H2O2 concentration of 20 mg/L and an Fe2+ concentration of 2 mg/L were found to be optimal at pH 3. With TiO2 photocatalysis, an improvement in the removal rate was noticed compared to UV treatment. The effect of Cu and Fe doping on the photocatalytic activity under UV light was studied; Cu doping enhanced the removal rate slightly, while Fe doping decreased it. Maximum acetamiprid removal was observed for an optimum catalyst loading of 1000 mg/L and a Cu concentration of 1 wt%. The mineralisation efficiency of the processes was low compared to the acetamiprid removal efficiency, which may be due to stable intermediate compounds formed during degradation. Kinetic studies were conducted for all the treatment processes, and all were found to follow pseudo-first-order kinetics. Kinetic constants were determined from the experimental data for all the processes and half-lives were calculated. The rate of reaction was in the order photo-Fenton > UV/H2O2 > Fenton > TiO2 photocatalysis > UV. Operating costs were calculated for the processes, and photo-Fenton was found to remove acetamiprid at the lowest operating cost in the least time. A kinetic model was developed for the photo-Fenton process using elementary reaction data and mass balance equations for the species involved in the process; the variation of acetamiprid concentration with time for different H2O2 and Fe2+ concentrations at pH 3 can be predicted with this model. The model was validated by comparing the simulated concentration profiles with those obtained from experiments. This study established the viability of the selected AOPs for the removal of acetamiprid from wastewater. Of the AOPs studied, photo-Fenton gives the highest removal efficiency at the lowest operating cost in the shortest time.
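Since every process was found to follow pseudo-first-order kinetics, C(t) = C0·exp(−kt), the rate constant k is the slope of ln(C0/C) versus t and the half-life is ln 2/k. A minimal sketch of that fit in Python, using hypothetical concentration data rather than the thesis's measurements:

```python
import numpy as np

# Hypothetical acetamiprid concentration profile (mg/L) over time (min);
# the thesis's measured data are not reproduced here.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 30.0])
C = np.array([50.0, 31.5, 19.8, 12.4, 7.9, 3.1])

# Pseudo-first-order model: ln(C0/C) = k*t, so k is the slope of a linear fit.
k, intercept = np.polyfit(t, np.log(C[0] / C), 1)
t_half = np.log(2) / k          # half-life in minutes

print(f"k = {k:.3f} min^-1, half-life = {t_half:.1f} min")
```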
Abstract:
Monomolecular films formed by self-assembly play a major role in the targeted functionalization of surfaces. They usually form through in situ adsorption of adsorbate molecules onto substrate surfaces, with anchor groups constituting the molecular segment that interacts with the substrate. Tripodal anchor groups, integrated into the molecule via a four-bonded branching atom, have the advantage of favoring a perpendicular arrangement of the adsorbate molecules with respect to the substrate surface. The interaction of sulfur-containing anchor groups in the form of thioether units with gold substrates is a particularly interesting and promising system, realized in this work through tripodal (methylthio)methyl units. The ordering of the monolayers can be increased by rigidly constructed molecules, since rigid structural elements reduce the intramolecular flexibility of the molecules. Within this work, two new tripodal thioether ligands were successfully synthesized. The tridentate nature of the ligands was demonstrated on the basis of molecular coordination chemistry: through synthesis of the tricarbonyltungsten(0)-ligand complexes, pseudo-octahedral chelate complexes could be unambiguously characterized. Furthermore, ordered monolayer formation on gold was confirmed and characterized in multiple ways. Two new ferrocenyl-functionalized halogen derivatives were synthesized, extending the existing library of this class of substances; from these, in a next step, a direct precursor for the synthesis of ferrocenyl-functionalized sulfur tripod ligands was obtained. This work was part of a cooperation project with the Experimental Physics I group within the framework of CINSaT. The newly synthesized compounds were thoroughly investigated in that group with regard to their adsorption properties on gold(111) surfaces, and the resulting monolayers were characterized (SHG, ellipsometry, XPS, STM, FTIR). A two-step film formation process could be established.
Abstract:
Globalization is widely regarded as the rise of the borderless world. In practice, however, true globalization points rather to a "spatial logic" by which globalization is manifested locally in the shape of insular space. Globalization in this sense is not merely about the physical fragmentation of space but also about the creation of social disintegration. This study tries to prove that global processes also create various forms of insular space, leading in turn to specific social implications. In order to examine the problem, this study looks at two cases: China's Pearl River Delta (PRD) and Jakarta in Indonesia. The PRD case reveals three forms of insular space, namely the modular, the concealed and the hierarchical. The modular points to the form of enclosed factories where workers are vulnerable to human-rights violations due to the absence of public control. The concealed refers to the production of insular space by subtle discrimination against certain social groups in urban space. And the hierarchical points to a production of insular space formed by an imbalanced population flow. The Jakarta case shows further types of insularity related to the complexity of a mega-city shaped by a culture of exclusion: dormant and hollow insularity. The dormant refers to the genesis of an insular, radical community out of a culture of resistance. The last type, the hollow, points to the process of making a "pseudo community" in which a sense of community never really develops and social relationships with the surroundings remain weak. Although global processes create various expressions of territorial insularization, this study finds that the "line of flight" is always present, where the border of insularity is crossed. The PRD produces a vernacular modernization, driven by peasants, that largely escapes the politics of insularization. In Jakarta, the culture of insularization produces urban informalities that have no space, neither spatially nor socially; hence their state of ephemerality continues as a tactic of place-making. This study argues that these crossings possess the potential to become a reconciling venue that defuses the power of insularity.
Abstract:
With China's rapid economic development over the last decades, the national demand for livestock products has quadrupled within the last 20 years. Most of that increase in demand has been met by subsidized industrialized production systems, while millions of smallholders, who still provide the larger share of livestock products in the country, have been neglected. Fostering those systems would help China to lower its strong urban migration streams, enhance the livelihoods of the poorer rural population and provide environmentally safe livestock products with a good chance of satisfying consumer demand for ecological food. Despite their importance, China's smallholder livestock keepers have not yet gained appropriate attention from governmental authorities and researchers. Profound analysis of those systems is required so that adequate support can lead to better resource utilization and productivity in the sector. To this aim, this pilot study analyzes smallholder livestock production systems in Xishuangbanna, in southern China. The area is bordered by Laos and Myanmar and counts geographically as a tropical region. Its climate is characterized by dry, temperate winters and hot summers with monsoon rains from May to October. While the region is plain, at about 500 m above sea level (asl) in the south, foothills of the Himalayas reach into the north of Xishuangbanna, where the highest peak reaches 2400 m asl. Apart from one larger city, Jinghong, Xishuangbanna is mainly covered by tropical rainforest, areas under agricultural cultivation and villages. The major income is generated through domestic Chinese tourism and agricultural production. Intensive rubber plantations are distinctive for the lowland plains, while small-scale traditional farms are scattered across the montane regions. In order to determine the current state and possible future chances of smallholder livestock production in the region, this study analyzed the current status of the smallholder livestock sector in the Naban River National Nature Reserve (NRNNR), an area largely representative of the whole prefecture. It covers about 50 square kilometers and reaches from 470 up to 2400 m asl. About 5500 inhabitants of different ethnic origins live in 24 villages. All data were collected between October 2007 and May 2010. Three major objectives were addressed in the study: 1. classifying existing pig production systems and exploring respective pathways for development; 2. quantifying the performance of pig breeding systems to identify bottlenecks for production; 3. analyzing past and current buffalo utilization to determine the chances and opportunities of buffalo keeping in the future. In order to classify the different pig production systems, a baseline survey (n=204, stratified cluster sampling) was carried out to gather data on livestock species and numbers, management practices, cultivated plant species and field sizes as well as socio-economic characteristics. Sampling used two clustering criteria at village level (altitude, ethnic affiliation), resulting in 13 clusters, from each of which 13-17 farms were interviewed. Categorical principal component analysis (CatPCA) and a two-step clustering algorithm were applied to identify determining farm characteristics and assort the recorded households into classes of livestock production types; a sketch of this classification step follows below.
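A minimal sketch of such a classification pipeline, assuming scikit-learn >= 1.2 and synthetic stand-in data: the study's CatPCA and two-step clustering are approximated here by one-hot encoding plus PCA and k-means, an illustrative substitute rather than the thesis's exact procedure.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Synthetic stand-in for the 204 surveyed farms; variable names follow
# the determinants reported in this abstract.
rng = np.random.default_rng(0)
n = 204
farms = pd.DataFrame({
    "keep_sow": rng.choice(["yes", "no"], n),
    "altitude_class": rng.choice(["low", "mid", "high"], n),
    "TLU_pig": rng.gamma(2.0, 0.5, n),
    "TLU_buffalo": rng.gamma(1.5, 0.8, n),
    "size_of_corn_fields": rng.gamma(2.0, 0.4, n),
    "size_of_tea_plantation": rng.gamma(1.2, 0.3, n),
    "size_of_rubber_field": rng.gamma(1.5, 0.6, n),
})

numeric = ["TLU_pig", "TLU_buffalo", "size_of_corn_fields",
           "size_of_tea_plantation", "size_of_rubber_field"]
prep = ColumnTransformer([
    ("cat", OneHotEncoder(sparse_output=False), ["keep_sow", "altitude_class"]),
    ("num", StandardScaler(), numeric),
])

# Encode and scale, reduce dimensionality, then cluster into three
# farm types (here: the LB, RB and PB classes of the study).
pipe = Pipeline([("prep", prep),
                 ("pca", PCA(n_components=3)),
                 ("kmeans", KMeans(n_clusters=3, n_init=10, random_state=0))])
labels = pipe.fit_predict(farms)
```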
The variables keep_sow_yes/no, TLU_pig, TLU_buffalo, size_of_corn_fields, altitude_class, size_of_tea_plantation and size_of_rubber_field were found to be the major determinants for the characterization of the recorded farms. All farms practice extensive or semi-intensive livestock production; pigs and buffaloes are the predominant livestock species, while chickens and aquaculture are present but play subordinate roles for livelihoods. All pig raisers rely on a single local breed, known in the region as the Small Ear Pig (SMEP). Three major production systems were identified: livestock-corn based (LB; 41%), rubber based (RB; 39%) and pig based (PB; 20%) systems. RB farms earn high income from rubber and fatten 1.9 ±1.80 pigs per household (HH), often using pig feed purchased at markets. PB farms own similarly sized rubber plantations and raise 4.7 ±2.77 pigs per HH, with fodder mainly cultivated or collected in the forest. LB farms grow corn, rice and tea and keep 4.6 ±3.32 pigs per HH, also fed with collected and cultivated fodder. Only 29% of all pigs were marketed (LB: 20%; RB: 42%; PB: 25%); average annual mortality was 4.0 ±4.52 pigs per farm (LB: 4.6 ±3.68; RB: 1.9 ±2.14; PB: 7.1 ±10.82). Pig feed mainly consists of banana pseudo-stem, corn and rice hulls and is prepared in batches about two to three times per week. Such fodder may be sufficient in energy content but lacks adequate protein. Pigs therefore suffer from malnutrition, which becomes most critical in the time before the harvest season around October. Farmers reported high occurrences of gastrointestinal parasites in carcasses, and pig stables were often wet and filled with manure. Deficits in nutritional and hygienic management are the major limits to development and should be the first issues addressed to improve productivity. SME pork was found to be known and preferred by local customers in town and by richer lowland farmers; however, high prices and the lacking availability of SME pork at local wet markets limited purchases. If the major management constraints are overcome, pig breeders (PB and LB farms) could increase the share of pigs marketed for town markets and provide fatteners to richer RB farmers. RB farmers are interested in fattening pigs for home consumption but show no motivation for commercial pig raising. To determine the productivity of input factors in pig production, reproductive performance, feed quality and quantity as well as the weight development of pigs under current management were recorded. Data collection included a progeny history survey covering 184 sows and 437 farrows, bi-weekly weighing of 114 pigs over a 16-month period on 21 farms (10 LB and 11 PB), and daily recording of the quality and quantity of feed given to a defined number of pigs on the same 21 farms. Feed samples of all recorded ingredients were analyzed for their respective nutrient contents. Since no literature values on the digestibility of banana pseudo-stem, a major ingredient of traditional pig feed in NRNNR, were found, a cross-sectional digestibility trial with 2x4 pigs was conducted at a station in the research area.
With the aid of the PRY Herd Life Model, all data were used to determine the systems' current (status quo, SQ) output and the productivity of the input factor "feed" in terms of saleable live weight per kg DM feed intake and monetary value of output per kg DM feed intake. Two improvement scenarios were simulated, assuming 1) that farmers adopt a culling management that generates the highest output per unit input (Scenario 1, SC I) and 2) that, through improved feeding, selected parameters of reproduction improve by 30% (SC II). Daily weight gain averaged 55 ± 56 g per day between day 200 and 600. The average energy content of the traditional feed mix was 14.92 MJ ME. Age at first farrowing averaged 14.5 ± 4.34 months, and the subsequent inter-farrowing interval was 11.4 ± 2.73 months. Litter size was 5.8 piglets, weaning age was 4.3 ± 0.99 months, and 18% of piglets died before weaning. Simulating pig production at its actual status showed that the monetary return on inputs (ROI) is negative (1:0.67) but improves (1:1.2) when culling management is optimized so that the highest output is gained per unit of feed input. If, in addition, better feeding, controlled mating and better resale prices at fixed dates are simulated, ROI further increases to 1:2.45, 1:2.69, 1:2.7 and 1:3.15 for the four respective grower groups. Those findings show the potential of pork production if basic measures of improvement are applied. Future exploration of the environment, including climate, market seasons and culture, is required before implementing the recommended measures, to ensure the sustainable development of a more effective and resource-conserving pork production. The two studies have shown that the production of local SME pigs plays an important role on traditional farms in NRNNR but that basic constraints limit their productivity. However, relatively simple measures are sufficient to reach a notable improvement. There is also demand for more SME pork on local markets, and, once the basic constraints have been overcome, pig farmers could turn into more commercial producers and provide pork to local markets. By that, environmentally safe meat can be offered to sensitive consumers while farmers increase their income and lower the risk of external shocks through a more diverse income-generating strategy. Buffaloes were found to be the second most important livestock species on NRNNR farms. While they were a core resource of mixed smallholder farms in the past, the expansion of rubber tree plantations and agricultural mechanization have reduced swamp buffalo numbers today. The third study seeks to predict the future utilization of buffaloes on different farm types in NRNNR by analyzing the dynamics of the buffalo population and land-use changes over time and by calculating the labor required for keeping buffaloes against the traction power that can be utilized for field preparation. The use of buffaloes for field work and the recent development of the regional buffalo population were analyzed through interviews with 184 farmers in 2007/2008 and discussions with 62 buffalo keepers in 2009. While pig based farms (PB; n=37) have abandoned buffalo keeping, 11% of the rubber based farms (RB; n=71) and 100% of the livestock-corn based farms (LB; n=76) kept buffaloes in 2008. Herd size was 2.5 ±1.80 buffaloes (n=84) in early 2008 and 2.2 ±1.69 (n=62) in 2009.
Field work on the farm's own land was the main reason for keeping buffaloes (87.3%), but lending work buffaloes to neighbors (79.0%) was also important. Other purposes were transport of goods (16.1%), buffalo trade (11.3%) and meat consumption (6.4%). Buffalo care required 6.2 ±3.00 working hours daily, while the annual working time of a buffalo was 294 ±216.6 hours. The area ploughed with buffaloes remained constant during the past 10 years despite an expansion of the land cropped per farm. Further rapid replacement of buffaloes by tractors is expected in the near future. While the work economy is drastically improved by the use of tractors, buffaloes can still provide cheap draught power and serve as a buffer against economic shocks on poorer farms. Especially poor farms, which lack alternative assets that could quickly be liquidated in times of urgent need for cash, should not abandon buffalo keeping. Livestock was found to be a major part of the small mixed farms in NRNNR. General productivity was low in both analyzed species, buffaloes and pigs. The productivity of pigs can be improved through basic adjustments in feeding, reproductive and hygienic management, and with external support pig production could be further commercialized to provide pork and weaners to local markets and fattening farms. Buffalo production is relatively time-intensive and will in future only be important for very poor farms and for farms cultivating very small terraces on steep slopes; these should be encouraged to keep buffaloes. With such measures, livestock production in NRNNR has a good chance of staying competitive in the future.
Abstract:
This freely available slide collection comes from the master's course of the same name in the summer semester of 2014. The accompanying lecture notes (in English) can be downloaded as a PDF from the University of Turku (Lutz M. Wegner, Sorting – The Turku Lectures, Lecture Notes in Computing 2014, Univ. of Turku, Finland, http://tucs.fi/publications/attachment.php?fname=bWegner_LutzMx14a.full.pdf). The revised material goes back to a guest lecture given there in 1987. Variants of Quicksort and Heapsort are considered, in the case of Quicksort especially variants for multisets and presorted inputs, both for linked lists and for random-access data structures. Besides standard Heapsort, Floyd's improvement and Dijkstra's Smoothsort are presented. A network variant and two external-memory methods based on Quicksort and Heapsort are examined, and the state of the art in stable in-situ methods is sketched. The material is suited to a lecture with attached project exercises in which students independently translate algorithms given as pseudo-code into efficient, runnable Java code and benchmark them against supplied test data in a runtime library.
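To illustrate the kind of multiset-aware Quicksort variant the slides treat, here is a minimal sketch of three-way (fat-pivot) partitioning; it is a generic textbook formulation in Python, not the lectures' own pseudo-code or the students' Java solutions.

```python
import random

def quicksort3(a, lo=0, hi=None):
    """In-place Quicksort with three-way (Dutch national flag) partitioning:
    keys equal to the pivot end up grouped in the middle, so a multiset with
    many duplicates is sorted without recursing into the equal run."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[random.randint(lo, hi)]
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1
    quicksort3(a, lo, lt - 1)   # elements < pivot
    quicksort3(a, gt + 1, hi)   # elements > pivot

data = [3, 1, 3, 3, 2, 1, 3]
quicksort3(data)
print(data)   # [1, 1, 2, 3, 3, 3, 3]
```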
Abstract:
Shareholders, employees and consumers play a central role in Corporate Social Responsibility (CSR). Not least because of their economic importance for companies and their ability to influence them, they are counted among the most important stakeholders. Within the business-ethics discussion, the view of a business case for CSR is increasingly gaining ground, according to which CSR is gaining importance in general and among these three stakeholders in particular, so that corresponding engagement brings numerous intangible benefits in addition to financial ones. On closer examination of the available studies, however, it remains to be asked to what extent this positive picture and its underlying assumptions are actually accurate. Neither are there sufficient studies dealing with the processes at the micro level, nor is the postulated willingness to buy and to pay that consumers express in surveys reflected in the market share of ethical products and services, a discrepancy generally explained by an "attitude-behaviour gap". With regard to Socially Responsible Investment (SRI), the market situation looks better, but it is commonly and tacitly assumed, without question, that an ethical investment is per se ethical. The thesis takes up these points and, after clarifying central concepts, empirically investigates the relevance and perception of CSR among the three stakeholders by means of the author's own qualitative case studies: a single case study among the workforce of a medium-sized company, a consumer survey on clothing purchases, and a single case study examining the practice of ethical investment using the ethics fund of Schellhammer & Schattera, which is designated as particularly "ethical". The end result shows that justified doubts about the frequently postulated positive developments and effects, and thus also about the view of a business case for CSR, are warranted. Even the selected ethical fund cannot unambiguously fulfil all the criteria to be applied to such an investment. The findings of the consumer study point in a similarly critical direction: the use of a different research approach shows that, for the majority of the consumers surveyed, ethical aspects in reality play no role or at best a very subordinate one. Accordingly, the "attitude-behaviour gap" may in many cases be only a pseudo-inconsistency which, as is shown, is theoretically and methodologically conditioned. By comparison, the findings of the employee study are very positive, but they also point to a generally neglected central aspect of CSR engagement: the empirically documented positive effects can indeed be confirmed, but they appear to be essentially tied to the condition that the CSR engagement is authentic. The consequences and questions arising from this and from the other studies are discussed in a concluding summary.
Abstract:
Market prices are well known to efficiently collect and aggregate diverse information regarding the value of commodities and assets, and markets have proved particularly suitable for pricing financial securities. This article provides an alternative application of the pricing mechanism to marketing research: using pseudo-securities markets to measure preferences over new product concepts. Surveys, focus groups, concept tests and conjoint studies are the methods traditionally used to measure individual and aggregate preferences. Unfortunately, these methods can be biased, costly and time-consuming to conduct. The present research is motivated by the desire to measure preferences efficiently and to predict new product success more accurately, building on the efficiency and incentive compatibility of security trading markets. The article describes a novel market research method, provides insight into why the method should work, and compares the results of several trading experiments against other methodologies such as concept testing and conjoint analysis.
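For intuition about how a pseudo-securities market turns trades into aggregate preference estimates, here is a minimal sketch of an automated market maker using Hanson's logarithmic market scoring rule (LMSR). This mechanism is illustrative only and is not claimed to be the one used in the article's trading experiments.

```python
import math

class LMSRMarket:
    """Hanson's logarithmic market scoring rule over N mutually exclusive
    outcomes (e.g., "concept i wins the category"). Prices sum to 1 and can
    be read as the market's aggregate preference/probability estimate."""

    def __init__(self, n_outcomes, b=100.0):
        self.q = [0.0] * n_outcomes   # net shares sold per outcome
        self.b = b                    # liquidity parameter

    def cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, i):
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[i] / self.b) / z

    def buy(self, i, shares):
        """Charge the trader the cost difference for buying `shares` of i."""
        new_q = list(self.q)
        new_q[i] += shares
        fee = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return fee

# Usage: three product concepts; traders who favor concept 0 buy it,
# pushing its price (the aggregate preference estimate) upward.
m = LMSRMarket(3)
m.buy(0, 50)
print([round(m.price(i), 3) for i in range(3)])
```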
Abstract:
The purpose of this review was to show that Complex Regional Pain Syndrome (CRPS) has become the pseudo-scientific resource used to explain the unconventional evolution of relatively anodyne pathologies, diagnosed without the rigor required by the essentially clinical criteria proposed by the international organizations that study this problem. To test this hypothesis, we reviewed the clinical records of patients who, having suffered an occupational contingency resulting in various types of injuries that we call the primary diagnosis, at some point in their evolution, and regardless of the specialty treating them, received a diagnosis of CRPS with which they were referred to a Pain Clinic for assessment and management. The records were reviewed in the light of three distinct, though closely related, scales for the presumptive diagnosis of the syndrome: the one proposed by Gibbons in 1992, the one formulated by the International Association for the Study of Pain (IASP) in 1994, and the one arising from the Budapest Consensus of 2003, which aims to remedy the imprecisions of the IASP scale. We reviewed 89 records, of which 63 met the case-selection criteria. In 68% of these, the diagnostic criteria proposed by the three scales were not met, and in 28.5% the Pain Clinics likewise failed in their role of technical authority, insofar as they endorsed an insufficiently proven condition.
Abstract:
Informed consent is the expression of the patient's will regarding an intervention or therapeutic treatment to be performed on his or her body; beforehand, the health professional must provide truthful, complete and timely information concerning the risks, the procedures, the expectations, the diagnosis and the prognosis of the illness and its respective treatment. Obligations and rights for both the patient and the health professional therefore derive from informed consent; being such a special and essential element of the doctor-patient relationship, and of the medical act itself, it takes on particular relevance in medical liability. In practice, however, its importance in that regard is often forgotten. The purpose of this article is to analyze the structural elements of informed consent, which are fundamental when the physician requests it; in that setting, situations arise that generate legal doubts as to its formation. To give a concrete and complete picture of informed consent, we divide this article into two parts. In the first, we deal with the doctrinal development of the concept, analyzing its notion, its historical evolution and its models (beneficence, paternalism, autonomy, among others); we then study the problems that can arise during the process of informing the patient, since not in all cases is the patient in full use of his or her faculties to grant consent. There we examine capacity in the medical sphere and the cases in which direct or indirect informed consent may be requested. Finally, we analyze the issue of risk and the doctrine developed in this regard, and lastly review consent in the clinical record and how to request it correctly from the patient. In the second part, we examine in detail the jurisprudential development of informed consent in the Constitutional Court and the Council of State of the Republic of Colombia from 1991 to date, from which the reader will doubtless be able to conclude how essential and imperative informed consent is in medical practice and in the prevention of unlawful harm.
Abstract:
This document is a detailed study of the problem known as Alhazen's Problem, formulated in the tenth century by the Arab philosopher and mathematician known in the West as Alhazen. The document gives a brief introduction to the philosopher and a brief review of his seminal treatise on optics, Kitab al-Manazir. It then carefully studies the lemmas required to tackle the problem and presents the solutions for the cases of spherical (convex and concave), cylindrical and conical mirrors. A conjecture is also offered to explain the logic of discovery implicit in the solution Alhazen gave. Both the lemmas and the solutions have been modeled in the dynamic-geometry software packages Cabri II Plus and Cabri 3D; readers interested in following these models must have those programs in order to open the files. In general, the presentations consist of three parts: (i) formulation of the problem (stated concisely); (ii) general outline of the construction (the essential steps leading to the requested construction and the auxiliary constructions the problem demands), which can be followed in the Cabri files; and (iii) proof (a detailed justification of the required construction). The Cabri II Plus files contain numbered buttons that can be activated by clicking on them; the numbering corresponds to that used in the document. Readers may freely drag the free points, recognizable by the mark (º); the remaining points cannot be modified, since they result from constructions already carried out according to the protocols recommended in the general outline.
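For readers without the Cabri files, the reflection condition at the heart of Alhazen's Problem (angle of incidence equals angle of reflection about the radius at the point of reflection) can also be explored numerically. The sketch below finds the reflection points on a circular mirror by grid scanning plus bisection; it is purely illustrative and has nothing to do with Alhazen's own conic-based constructions.

```python
import numpy as np

def alhazen_points(A, B, r=1.0, n_grid=2000):
    """Find points P on the circle |P| = r where a ray from source A
    reflects to observer B, i.e. the reflection of the incoming ray
    about the radius OP points at B."""
    A, B = np.asarray(A, float), np.asarray(B, float)

    def mismatch(t):
        P = r * np.array([np.cos(t), np.sin(t)])
        n = P / r                                  # unit normal at P
        d = P - A
        d /= np.linalg.norm(d)                     # incoming ray direction
        refl = d - 2 * (d @ n) * n                 # specular reflection
        u = B - P
        u /= np.linalg.norm(u)                     # direction toward B
        return refl[0] * u[1] - refl[1] * u[0], refl @ u

    ts = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
    vals = [mismatch(t) for t in ts]
    sols = []
    for i in range(n_grid):
        (c1, d1), (c2, d2) = vals[i], vals[(i + 1) % n_grid]
        if c1 * c2 < 0 and d1 > 0 and d2 > 0:      # aligned, with sign change
            lo, hi = ts[i], ts[i] + 2 * np.pi / n_grid
            for _ in range(60):                    # bisection refinement
                mid = 0.5 * (lo + hi)
                if mismatch(lo)[0] * mismatch(mid)[0] <= 0:
                    hi = mid
                else:
                    lo = mid
            t = 0.5 * (lo + hi)
            sols.append(r * np.array([np.cos(t), np.sin(t)]))
    return sols

# Example with an illustrative source and observer outside a unit mirror:
print(alhazen_points(A=(2.0, 0.5), B=(1.5, -1.2)))
```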
Abstract:
This paper examines whether there is evidence that the increase in female labor supply over the last three decades affects men's labor conditions, in terms of employment and earnings, in a developing country such as Colombia.
Abstract:
This study focuses on syllabic processing, within the field of lexical access, seeking to provide empirical evidence on the relevance of the syllable as a processing unit in Spanish. Experiment 1: 18 psychology students. Experiment 2: 20 psychology students. Experiment 3: 20 university students. Experiment 4: 22 psychology students. First, a dictionary of syllable frequency in Spanish was compiled in order to analyze the statistical distribution of syllable frequency. Two experiments were then carried out with the two methodologies most used in lexical-access research (naming and lexical decision, LDT), which shed some light on how syllable frequency affects word recognition. Two hypotheses are thereby put to the test: on the one hand, the hypothesis defending the psychological reality of the syllable, at least in "shallow" languages, and its status as an access code to meaning; on the other, the "orthographic redundancy" hypothesis (Seidenberg, 1987, 1989), which reduces the observed syllabic effects to mere statistical redundancy of letters. To that end, another dictionary, of bigram frequency in Spanish, was compiled, and two further experiments were conducted with the same techniques (naming and LDT), manipulating syllable frequency while controlling bigram frequency. Positional syllable frequency appears to exert a notable influence on visual word recognition. Moreover, it is a stable effect, obtained with both words and pseudo-words and with both methodologies. The more frequent the syllables in a given position of a word, the more time we spend processing it; this effect runs contrary to that of lexical frequency. The two effects are independent; no interaction appeared in any of the four experiments. Everything suggests that their respective influences on word recognition occur at different levels and that the two variables do not share the same mechanisms: they are modular systems, and letter redundancy is not the variable that explains the syllable effects. Spanish speakers appear habitually to use a non-direct route to access meaning, segmenting words into syllables at a processing level different from that of lexical comprehension. We propose a model in which phonological or sublexical processing of the word is computationally prior to the properly lexical level. This work does not exhaust the questions surrounding syllable frequency. For example, it is likely that not all syllables of a word carry the same weight in processing. It also remains to be tested whether the same syllable-frequency effect is obtained with auditory presentation, and it would be interesting to examine the weight of syllable structure (in terms of the distribution of vowels and consonants) relative to frequency.
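The positional syllable frequencies the study's dictionary tabulates can be computed as a simple corpus count. A minimal sketch, assuming a pre-syllabified toy corpus; the actual dictionary, corpus and Spanish syllabification algorithm are not reproduced here.

```python
from collections import Counter

# Hypothetical, pre-syllabified mini-corpus; a real study would use a large
# frequency-tagged lexicon plus a Spanish syllabification algorithm.
corpus = [["ca", "sa"], ["ca", "mi", "no"], ["me", "sa"], ["ca", "ma"]]

pos_freq = Counter()                     # (syllable, position) -> count
for word in corpus:
    for i, syl in enumerate(word):
        pos_freq[(syl, i)] += 1

def positional_frequency(word_syllables):
    """Mean positional frequency of a word's syllables: the variable
    manipulated in the naming/LDT experiments."""
    counts = [pos_freq[(s, i)] for i, s in enumerate(word_syllables)]
    return sum(counts) / len(counts)

# ("ca", 0) occurs 3 times and ("sa", 1) twice, so "casa" scores 2.5.
print(positional_frequency(["ca", "sa"]))
```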
Abstract:
This article takes as its starting point a curious controversy unleashed by a physics professor at New York University named Alan Sokal. The controversy arose from the publication, in a prestigious scholarly journal, of an incomprehensible article written in "learned language", which was in reality a ferocious parody aimed at criticizing the overrated French postmodernists. The futility of empty discourse in the world of science, or if you prefer pseudo-scientific jargon, is taken by the author as the occasion to argue for a renewal or "cleansing", in this case, of the scientific foundations of educational technology. Starting from the view of educational technology as a "transversal subject" in the field of teaching, this paper proposes updating some of its supporting sciences, in some cases, and integrating new scientific foundations, in others, which should lead to an updated role for technology in education. Thus, the need to renew the foundations of audiovisual communication, the contributions of systems thinking, the interest of constructivist psychology, and the methodological findings contributed by sociocultural theory for the research and design of teaching situations are some of the key elements the reader will find in this article.