977 results for Over-complete Discrete Wavelet Transformation
Abstract:
OBJECTIVES Megacystis (MC) is rare and often associated with other structural and chromosomal anomalies. In euploid cases with early oligohydramnios, prognosis is poor, mainly due to pulmonary hypoplasia and renal damage. We report our experience of the past 20 years. METHODS A retrospective review of cases with prenatally diagnosed MC was performed. Complete prenatal as well as postnatal medical records from 1989 to 2009 were reviewed, focusing on diagnostic precision, fetal interventions [vesicocentesis (VC), vesicoamniotic shunt (VAS)], short- and long-term outcome, and potential prognostic factors. RESULTS 68 cases were included. Follow-up was available in 54 cases (9 girls and 45 boys, including 3 cases with aneuploidy). We found 39 cases of isolated MC at sonography (5 girls and 34 boys). 24 fetuses with isolated MC underwent VC and VAS at 19.6 ± 6.3 and 20 ± 4.9 weeks of gestation, respectively. The survival rate was higher in male than in female fetuses (51 vs. 33%). Renal problems occurred in 4/14 prenatally treated fetuses, and in 1/10 when cases with prune belly syndrome (PBS) were excluded from the analysis. CONCLUSIONS Our study shows that careful selection of MC cases, excluding fetuses with PBS, combined with early treatment still has the potential to improve outcome.
Abstract:
Research on lifestyle physical activity interventions suggests that they help individuals meet the new recommendations for physical activity made by the Centers for Disease Control and Prevention (CDC) and the American College of Sports Medicine (ACSM). The purpose of this research was to describe the rates of adherence to two lifestyle physical activity intervention arms and to examine the association between adherence and outcome variables, using data from Project PRIME, a lifestyle physical activity intervention based on the transtheoretical model and conducted by the Cooper Institute of Aerobics Research, Dallas, Texas. Participants were 250 sedentary healthy adults, aged 35 to 70 years, primarily non-Hispanic White, and in the contemplation and preparation stages of readiness to change. They were randomized to a group (PRIME G) or a mail- and telephone-delivered condition (PRIME C). Adherence measures included attending class (PRIME G), completing a monthly telephone call with a health educator (PRIME C), and completing homework assignments and self-monitoring minutes of moderate- to vigorous physical activity (both groups). In the first results paper, adherence over time and between conditions was examined: Attendance in group, completing the monthly telephone call, and homework completion decreased over time, and participants in PRIME G were more likely to complete homework than those in PRIME C. Paper 2 aimed to determine whether the adherence measures predicted achievement of the CDC/ACSM physical activity guideline. In separate models for the two conditions, a latent variable measuring adherence was found to predict achievement of the guideline. Paper 3 examined the association between adherence measures and the transtheoretical model's processes of change within each condition. For both, participants who completed at least two thirds of the homework assignments improved their use of the processes of change more than those who completed less than that amount. These results suggest that encouraging adherence to a lifestyle physical activity intervention, at least among already motivated volunteers, may increase the likelihood of beneficial changes in the outcomes.
Abstract:
By looking at Great Britain and the American colonies in conjunction with the larger British Atlantic Empire, historians can better understand the political, social, and cultural transformations that occurred when transatlantic actors met. William Samuel Johnson is an example of an "ordinary" agent who nonetheless had extensive contacts with numerous British and American thinkers. While acting on Connecticut's behalf in London between 1767 and 1771, he sent reports back to Connecticut governors Jonathan Trumbull and William Pitkin on parliamentary proceedings while corresponding with the people who traveled around the Atlantic world during this critical period: merchants, seafarers, emigrants, soldiers, missionaries, radicals and conservatives, reformers, and politicians. He is also representative of the late eighteenth-century empire writ large. Agents, who had once been a source of stability in the far-flung colonies, became a destabilizing force as confusion and conflict grew over conceptual ideas of what constituted "the empire" and who was included in it. Johnson was a sane observer in the midst of the ideological and administrative upheaval of the 1760s and 1770s. His subsequent loyalism and political obscurity during the war years were in many ways a result of his attempts to reconcile various factional interests during his tenure as an agent. Although he did his best to resolve these divisions and provide an accurate account of the powerful nationalistic forces gathering on both sides of the Atlantic on the eve of the American Revolution, the agents' collective failures as transatlantic mediators helped bring about the collapse of an imperial community. This disintegration had dramatic effects on the whole of the Atlantic world.
Abstract:
Introduction. Cancer is the second most common cause of death in the USA (2). Studies have shown a coexistence of cancer and hypogonadism (9,31,13). The majority of patients with cancer develop cachexia, which cannot be solely explained by the anorexia seen in these patients. Testosterone is a male sex hormone which is known to increase muscle mass and strength, maintain cancellous bone mass, and increase cortical bone mass, in addition to improving libido, sexual desire, and fantasy (14). If a high prevalence of hypogonadism is detected in male cancer patients, and a significant difference exists in testosterone levels in cancer patients with cachexia versus those without cachexia, testosterone may be administered in future randomized trials to help alleviate cachexia.

Study group and design. The study group consisted of male cancer patients and non-cancer controls aged between 40 and 70 years. The primary study design was cross-sectional with a sample size of 135. The present data analysis was done on a subset convenience sample of 72 patients recruited between November 2006 and January 2010.

Methods. Patients aged 40-70 years with or without a diagnosis of cancer were recruited into the study. Patients with a BMI over 35, significant edema, non-melanomatous skin cancer, current alcohol or illicit drug abuse, concomitant use of medications interfering with the gonadal axis or of anabolic agents, or tube feeding or parenteral nutrition within 3 months prior to enrollment were excluded from the study. The study was approved by the Institutional Review Board of Baylor College of Medicine and is being conducted at the Michael E. DeBakey Veterans Affairs Medical Center in Houston. This thesis is a pilot data analysis that employs the smaller subset convenience sample of 72 patients (of the intended sample of 135 patients) recruited between November 2006 and January 2010. The primary aim of this analysis is to compare the proportion of patients with hypogonadism in the male cancer and non-cancer control groups, and to evaluate whether a significant difference exists with respect to testosterone levels in male cancer patients with cachexia versus those without cachexia. The procedures of the study relevant to the current data analysis included blood collection to measure levels of testosterone and measurement of body weight to categorize cancer patients into cancer cachexia and cancer non-cachexia sub-groups.

Results. After logarithmic transformation of the data of the cancer and control groups, the unpaired t test with unequal variances was done. The proportion of patients with hypogonadism in the male cancer and non-cancer control groups was 47.5% and 22.7%, respectively, with a Pearson chi-square statistic of 1.6036 and a p value of 0.205. Comparing the mean calculated bioavailable testosterone in male cancer patients and non-cancer controls resulted in a t statistic of 21.83 and a p value less than 0.001. When the cancer group alone was considered, the mean free testosterone, calculated bioavailable testosterone and total testosterone levels were 3.93, 5.09, and 103.51, respectively, in the cancer non-cachexia sub-group and 3.58, 4.17, and 84.08, respectively, in the cancer cachexia sub-group. The unpaired t test with equal variances showed that the two sub-groups had p values of 0.2015, 0.1842, and 0.4894 with respect to calculated bioavailable testosterone, free testosterone, and total testosterone, respectively.

Conclusions. The small sample size of this exploratory study, resulting in small power, does not allow us to draw definitive conclusions. For the given sub-sample, the proportion of patients with hypogonadism in the cancer group was not significantly different from that of patients with hypogonadism in the control group. Inferences on the prevalence of hypogonadism in male cancer patients could not be made in this paper, as the sub-sample is small and therefore not representative of the general population. However, there was a statistically significant difference in calculated bioavailable testosterone levels in male cancer patients versus non-cancer controls. Analysis of cachectic and non-cachectic patients within the male cancer group showed no significant difference in testosterone levels (total, free, and calculated bioavailable testosterone) between the two sub-groups. However, to reiterate, this study is exploratory and the results may change once the complete dataset is obtained and analyzed. It nevertheless serves as a good template to guide further research and analysis.
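To make the analysis described above concrete, here is a minimal sketch of the two comparisons (Welch's unequal-variance t test on log-transformed testosterone values, and a chi-square test of the hypogonadism proportions) using SciPy; all numbers are placeholders for illustration, not the study data.

```python
# Sketch of the two comparisons described above, using SciPy.
# All numbers below are PLACEHOLDERS for illustration, not the study data.
import numpy as np
from scipy import stats

# Hypothetical bioavailable testosterone values for a few patients per group.
cancer = np.array([80.0, 95.0, 110.0, 70.0, 60.0, 105.0])
control = np.array([150.0, 170.0, 140.0, 160.0, 180.0, 155.0])

# Log transformation followed by Welch's unpaired t test (unequal variances).
t_stat, p_val = stats.ttest_ind(np.log(cancer), np.log(control), equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {p_val:.4f}")

# Pearson chi-square test on hypogonadism proportions (hypothetical counts).
# Rows: cancer / control; columns: hypogonadal / eugonadal.
table = np.array([[12, 13],
                  [6, 19]])
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```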
Abstract:
Glioblastoma multiforme (GBM) is an aggressive, high-grade brain tumor. Microarray studies have shown a subset of GBMs with a mesenchymal gene signature. This subset is associated with poor clinical outcome and resistance to treatment. To establish the molecular drivers of this mesenchymal transition, we correlated transcription factor expression with the mesenchymal signature and identified transcriptional co-activator with PDZ-binding motif (TAZ) as highly associated with the mesenchymal shift. High TAZ expression correlated with worse clinical outcome and higher grade. These data led to the hypothesis that TAZ is critical to the mesenchymal transition and aggressive clinical behavior seen in GBM. We investigated the expression of TAZ, its binding partner TEAD, and the mesenchymal marker FN1 in human gliomas. Western analyses demonstrated increased expression of TAZ, TEAD4, and FN1 in GBM relative to lower-grade gliomas. We also identified CpG islands in the TAZ promoter that are methylated in most lower-grade gliomas, but not in GBMs. TAZ-methylated glioma stem cell (GSC) lines treated with a demethylating agent showed an increase in TAZ mRNA and protein expression; therefore, methylation may be another novel way in which TAZ is regulated, since TAZ is epigenetically silenced in tumors with a better clinical outcome. To further characterize the role of TAZ in gliomagenesis, we stably silenced or over-expressed TAZ in GSCs. Silencing of TAZ decreased invasion, self-renewal, mesenchymal protein expression, and tumor-initiating capacity. Over-expression of TAZ led to an increase in invasion, mesenchymal protein expression, mesenchymal differentiation, and tumor-initiating ability. These actions depend on TAZ interacting with TEAD, since all these effects were abrogated when TAZ could not bind to TEAD. We also show that TAZ and TEAD directly bind to mesenchymal gene promoters. Thus, the TAZ-TEAD interaction is critically important in the mesenchymal shift and in the aggressive clinical behavior of GBM. We identified TAZ as a regulator of the mesenchymal transition in gliomas. TAZ could be used as a biomarker both to estimate prognosis and to stratify patients into clinically relevant subgroups. Since the mesenchymal transition is correlated with tumor aggressiveness, strategies to target and inhibit TAZ-TEAD and its downstream gene targets may be warranted as an alternative treatment approach.
Abstract:
In the Persian Gulf and the Gulf of Oman, marl forms the primary sediment cover, particularly on the Iranian side. A detailed quantitative description of the sediment components > 63 µm has been attempted in order to establish the regional distribution of the most important constituents as well as the criteria governing marl sedimentation in general. During the course of the analysis, the sand fraction from about 160 bottom-surface samples was split into 5 phi fractions and 500 to 800 grains were counted in each individual fraction. The grains were cataloged in up to 40 grain-type categories. The gravel fraction was counted separately and the values calculated as weight percent. Fundamental to understanding the mode of formation of the marl sediment is the "rule" of independent availability of component groups. It states that the sedimentation of different component groups takes place independently, and that variation in the quantity of one component is independent of the presence or absence of other components. This means, for example, that different grain size spectra are not necessarily developed through transport sorting. In the Persian Gulf they are more likely the result of differences in the amount of clay-rich fine sediment brought into the restricted mouth areas of the Iranian rivers. These local increases in clayey sediment dilute the autochthonous, for the most part carbonate, coarse fraction. This also explains the frequent facies changes from carbonate to clayey marl. The main constituent groups of the coarse fraction are faecal pellets and lumps, the non-carbonate mineral components, the Pleistocene relict sediment, the benthonic biogene components and the plankton. Faecal pellets and lumps are formed through grain size transformation of fine sediment. Higher percentages of these components can be correlated with large amounts of fine sediment and organic C. No discernible change takes place in carbonate minerals as a result of digestion and faecal pellet formation. The non-carbonate sand components originate from several unrelated sources and can be distinguished by their different grain size spectra as well as by other characteristics. The Iranian rivers supply the greatest amounts (well sorted fine sand). Their quantitative variations can be used to trace fine sediment transport directions. Similar mineral maxima in the sediment of the Gulf of Oman mark the path of the Persian Gulf outflow water. Far out from the coast, the basin bottoms in places contain abundant relict minerals (poorly sorted medium sand) and localized areas of reworked salt dome material (medium sand to gravel). Wind transport produces only a minimal "background value" of mineral components (very fine sand). Biogenic and non-biogenic relict sediments can be placed in separate component groups with the help of several petrographic criteria. Part of the relict sediment (well sorted fine sand) is allochthonous and was derived from the terrigenous sediment of river mouths. The main part (coarse, poorly sorted sediment), however, was derived from the late Pleistocene and forms a quasi-autochthonous cover over wide areas which receive little recent sedimentation. Bioturbation results in a mixing of the relict sediment with the overlying younger sediment. Resulting vertical sediment displacement of more than 2.5 m has been observed. This vertical mixing of relict sediment is also partially responsible for the present-day grain size anomalies (coarse sediment in deep water) found in the Persian Gulf.
The mainly aragonitic components forming the relict sediment show a finely subdivided facies pattern reflecting the paleogeography of carbonate tidal flats dating from the post-Pleistocene transgression. Standstill periods are reflected at 110-125 m (shelf break), 64-61 m and 53-41 m (e.g. coarse-grained quartz and oolite concentrations), and at 25-30 m. Comparing these depths to similar occurrences on other shelf regions (e.g. Timor Sea) leads to the conclusion that at this time minimal tectonic activity was taking place in the Persian Gulf. The Pleistocene climate, as evidenced by the absence of Iranian river sediment, was probably drier than the present-day Persian Gulf climate. Foremost among the benthonic biogene components are the foraminifera and mollusks. When a ratio is set up between the two, it can be seen that each group is very sensitive to bottom type, i.e., the production of benthonic mollusca increases when a stable (hard) bottom is present, whereas the foraminifera favour a soft bottom. In this way, regardless of the grain size, areas with high and low rates of recent sedimentation can be sharply defined. The almost complete absence of mollusks in water deeper than 200 to 300 m gives a rough sedimentologic water-depth indicator. The sum of the benthonic foraminifera and mollusca was used as a relatively constant reference value for the investigation of many other sediment components. The ratio between arenaceous foraminifera and those with carbonate shells shows a direct relationship to the amount of coarse-grained material in the sediment, as the frequency of arenaceous foraminifera depends heavily on the availability of sand grains. The nearness of "open" coasts (Iranian river mouths) is directly reflected in the high percentage of plant remains, and indirectly by the increased numbers of ostracods and vertebrates. Plant fragments do not reach their ultimate point of deposition in a free-swimming state, but are transported along with the remainder of the terrigenous fine sediment. The echinoderms (mainly echinoids in the West Basin and ophiuroids in the Central Basin) attain their maximum development at the greatest depth reached by the action of the largest waves. This depth varies, depending on the exposure of the slope to the waves, between 12 to 14 and 30 to 35 m. Corals and bryozoans have proved to be good indicators of stable, unchanging bottom conditions. Although bryozoans and alcyonarian spicules are independent of water depth, scleractinians thrive only above 25 to 30 m. The beginning of recent reef growth (restricted by low winter temperatures) was seen in only one single area, on a shoal under 16 m of water. The coarse plankton fraction was studied primarily through the use of a plankton-benthos ratio. The increase in planktonic foraminifera with increasing water depth is here heavily masked by the "adjacent sea effect" of the Persian Gulf: for the most part the foraminifera have drifted in from the Gulf of Oman. In contrast, the planktonic mollusks are able to colonize the entire Persian Gulf water body. Their amount in the plankton-benthos ratio always increases with water depth and thereby gives a reliable picture of local water-depth variations. This holds true to a depth of around 400 m (corresponding to 80-90 % plankton). This water-depth effect can be removed by graphical analysis, allowing the percentage of planktonic mollusks per total sample to be used as a reference base for the relative sedimentation rate (sedimentation index).
These values vary between 1 and > 1000 and thereby agree well with all the other lines of evidence. The "pteropod ooze" facies is then markedly dependent on the sedimentation rate and can theoretically develop at any depth greater than 65 m (proven at 80 m). It should certainly no longer be thought of as a "deep sea" sediment. Based on the component distribution diagrams, grain size and carbonate content, the sediments of the Persian Gulf and the Gulf of Oman can be grouped into 5 provisional facies divisions (Chapter 19). Particularly noteworthy among these are, first, the fine-grained clayey marl facies occupying the 9 narrow outflow areas of rivers, and second, the coarse-grained, high-carbonate marl facies rich in relict sediment which covers wide sediment-poor areas of the basin bottoms. Sediment transport is for the most part restricted to grain sizes < 150 µm and in shallow water is largely coast-parallel due to wave action, at times supplemented by tidal currents. Below the wave base, gravity transport prevails. The only current capable of moving sediment is the Persian Gulf outflow water in the Gulf of Oman.
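As a rough illustration of the plankton-benthos ratio and sedimentation index used above, here is a minimal sketch; the grain counts are hypothetical, and the graphical water-depth correction described in the abstract is not modelled.

```python
# Minimal sketch of a plankton-benthos ratio from coarse-fraction grain counts.
# Counts are hypothetical; the original study removes the water-depth effect
# graphically before using planktonic mollusks as a relative sedimentation index.
def plankton_benthos_percentage(planktonic_mollusks: int,
                                benthonic_foraminifera: int,
                                benthonic_mollusks: int) -> float:
    """Percentage of planktonic mollusks in the counted coarse fraction."""
    benthos = benthonic_foraminifera + benthonic_mollusks
    total = planktonic_mollusks + benthos
    return 100.0 * planktonic_mollusks / total if total else 0.0

# Example: a low plankton percentage points to strong dilution by other
# components, i.e. a relatively high local sedimentation rate, and vice versa.
print(plankton_benthos_percentage(planktonic_mollusks=40,
                                  benthonic_foraminifera=300,
                                  benthonic_mollusks=160))  # ~8 %
```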
Abstract:
Seven opal-CT-rich and five quartz-rich porcellanites and cherts from Site 504 have oxygen-isotope values ranging from 24.4 to 29.4 per mil. In opal-CT rocks, δ18O becomes larger with sub-bottom depth and with age. Quartz-rich rocks do not show these trends. Boron, in general, increases with decreasing δ18O for porcellanites and cherts considered together, supporting the conclusion that boron is incorporated within the quartz crystal structure during precipitation of the SiO2. Silicification of the chalks at Site 504 began 1 m.y. ago, that is, 5 m.y. after sedimentation commenced on the oceanic crust. Temperatures of chert formation determined from oxygen-isotope compositions reflect diagenetic temperatures rather than bottom-water temperatures, and are comparable to temperatures of formation determined by down-hole measurements. Opal-A in the chalks began conversion to opal-CT when a temperature of 50°C was reached in the sediment column. Conversion of opal-CT to quartz started at 55°C. Silicification occurred over a stratigraphic thickness of about 10 meters when the temperature at the top of the 10 meters reached about 50°C. It took about 250,000 years to complete the silica transformation within each 10-meter interval of sediment at Site 504. Quartz formed over a stratigraphic range of at least 30 meters, at temperatures of about 54 to 60°C. The times and temperatures of silicification of the Site 504 rocks are more like those at continental margins than those in deep-sea, open-ocean deposits.
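A brief note on how such isotopic formation temperatures are obtained (a generic sketch, not the specific calibration used for Site 504): silica-water oxygen-isotope thermometry is conventionally written as a fractionation equation of the form

\[ 1000\,\ln\alpha_{\mathrm{SiO_2\text{-}H_2O}} \;\approx\; \delta^{18}\mathrm{O}_{\mathrm{SiO_2}} - \delta^{18}\mathrm{O}_{\mathrm{H_2O}} \;=\; A\,\frac{10^{6}}{T^{2}} + B, \]

with T in kelvin and A, B empirical calibration constants (placeholders here, since the abstract does not give them); a measured δ18O of the chert plus an assumed δ18O of the pore water then yields the formation temperature T.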
Abstract:
The twentieth century brought a new sensibility characterized by the discredit of Cartesian rationality and the weakening of universal truths, associated with aesthetic values such as order, proportion and harmony. In the middle of the century, theorists such as Theodor Adorno, Rudolf Arnheim and Anton Ehrenzweig warned about the transformation under way in the artistic field. Contemporary aesthetics seemed to have a new goal: to deny the idea of art as an organized, finished and coherent structure. Order had lost its privileged position. Disorder, probability, arbitrariness, accidentality, randomness, chaos, fragmentation, indeterminacy... Gradually new terms were coined by aesthetic criticism to explain what had been happening since the beginning of the century. The first essays on the matter sought to provide new interpretative models based on, among other arguments, the phenomenology of perception, the recent discoveries of quantum mechanics, the deeper layers of the psyche or information theory. Overall, these were worthy attempts to give theoretical content to a situation as obvious as it was devoid of a founding charter. Finally, in 1962, Umberto Eco brought together all these efforts by proposing a single theoretical frame in his book Opera Aperta. According to his point of view, all the aesthetic production of the twentieth century had a characteristic in common: its capacity to express multiplicity. For this reason, he considered that the nature of contemporary art was, above all, ambiguous. The aim of this research is to clarify the consequences of the incorporation of ambiguity into architectural theoretical discourse. We should start by making an accurate analysis of this concept. However, this task is quite difficult because ambiguity does not allow itself to be clearly defined. This concept has the disadvantage that its signifier is as imprecise as its signified. In addition, the negative connotations that ambiguity still has outside the aesthetic field stigmatize this term and make its use problematic. Another problem of ambiguity is that the contemporary subject is able to locate it in all situations. This means that, in addition to distinguishing ambiguity in contemporary productions, the subject also does so in works belonging to remote ages and styles. For that reason, it could be said that everything is ambiguous. And that's correct, because somehow ambiguity is present in any creation of the imperfect human being. However, as Eco, Arnheim and Ehrenzweig pointed out, there are two major differences between current and past contexts. One affects the subject and the other the object. First, it is the contemporary subject, and no other, who has acquired the ability to value and assimilate ambiguity. Secondly, ambiguity was an unexpected aesthetic result in former periods, while in the contemporary object it has been codified and is deliberately present. In any case, as Eco did, we consider the use of the term ambiguity appropriate to refer to the contemporary aesthetic field. Any other term with a more specific meaning would only show partial and limited aspects of a situation that is quite complex and difficult to diagnose. Contrary to what might normally be expected, in this case ambiguity is the term that fits best due to its particular lack of specificity. In fact, this lack of specificity is what allows a dynamic condition to be assigned to the idea of ambiguity that would hardly be operative in other terms.
Thus, instead of trying to define the idea of ambiguity, we will analyze how it has evolved and its consequences for the architectural discipline. Instead of trying to define what it is, we will examine what its presence has meant at each moment. We will deal with ambiguity as a constant presence that has always been latent in architectural production but whose nature has been modified over time. Eco, in the mid-twentieth century, discerned between classical ambiguity and contemporary ambiguity. Currently, half a century later, the challenge is to discern whether the idea of ambiguity has remained unchanged or has undergone a new transformation. What this research will demonstrate is that it is possible to detect a new transformation that has much to do with the cultural and aesthetic context of the last decades: the transition from modernism to postmodernism. This assumption leads us to establish two different levels of contemporary ambiguity, each one related to one of these periods. The first level of ambiguity has been widely known for many years. Its main characteristics are a codified multiplicity, an interpretative freedom and an active subject who gives conclusion to an object that is incomplete or indefinite. This level of ambiguity is related to the idea of indeterminacy, a concept successfully introduced into contemporary aesthetic language. The second level of ambiguity has gone almost unnoticed by architectural criticism, although it has been identified and studied in other theoretical disciplines. Much of the work of Fredric Jameson and François Lyotard shows reasonable evidence that the aesthetic production of postmodernism has transcended modern ambiguity to reach a new level in which, despite the existence of multiplicity, the interpretative freedom and the active subject have been questioned, and at last denied. In this period ambiguity seems to have reached a new level in which it is no longer possible to obtain a conclusive and complete interpretation of the object because it has become an unreadable device. The postmodern production offers a kind of inaccessible multiplicity and its nature is deeply contradictory. This hypothetical transformation of the idea of ambiguity has an outstanding analogy with that shown in the poetic analysis made by William Empson, published in 1936 in his Seven Types of Ambiguity. Empson established different levels of ambiguity and classified them according to their poetic effect. This layout had an ascending logic towards incoherence. In the seventh level, where ambiguity is highest, he located the contradiction between irreconcilable opposites. It could be said that contradiction, once it undermines the coherence of the object, was the best way contemporary aesthetics found to confirm the Hegelian judgment, according to which art would ultimately reject its capacity to express truth. Much of the transformation of architecture throughout the last century is related to the active involvement of ambiguity in its theoretical discourse. In modern architecture ambiguity is present only afterwards, in the critical review made by theoreticians like Colin Rowe, Manfredo Tafuri and Bruno Zevi. The publication of several studies on Mannerism in the forties and fifties rescued certain virtues of a historical style that had been undervalued due to its deviation from the Renaissance canon. Rowe, Tafuri and Zevi, among others, pointed out the similarities between Mannerism and certain qualities of modern architecture, both devoted to breaking previous dogmas.
The recovery of Mannerism allowed ambiguity and modernity to be joined for the first time in the same sentence. In postmodernism, on the other hand, ambiguity is present ex professo, developing a prominent role in the theoretical discourse of this period. The distance between its analytical identification and its operational use quickly disappeared because of structuralism, an analytical methodology with the aspiration of becoming a modus operandi. Under its influence, architecture began to be identified and studied as a language. Thus, the postmodern theoretical project discerned between the components of architectural language and developed them separately. Consequently, there is not one but three projects related to postmodern contradiction: the semantic project, the syntactic project and the pragmatic project. Leading these projects are those prominent architects whose work manifested a special interest in exploring and developing the potential of the use of contradiction in architecture. Thus, Robert Venturi, Peter Eisenman and Rem Koolhaas were the ones who established the main features through which architecture developed the dialectics of ambiguity, at its last and most extreme level, as a theoretical project in each component of architectural language. Robert Venturi developed a new interpretation of architecture based on its semantic component, Peter Eisenman did the same with its syntactic component, as did Rem Koolhaas with its pragmatic component. With this approach, this research aims to establish a new reflection on the architectural transformation from modernity to postmodernity. It can also serve to shed light on certain still-unnoticed aspects that have shaped the architectural heritage of past decades, the consequence of a fruitful relationship between architecture and ambiguity and its provocative consummation in a contradictio in terminis. This research therefore focuses primarily on the repercussions of the incorporation of ambiguity, in the form of contradiction, into postmodern architectural discourse, through each of its three theoretical projects. It is accordingly structured around a main chapter entitled Dialéctica de la ambigüedad como proyecto teórico postmoderno, which is broken down into three parts entitled Proyecto semántico. Robert Venturi; Proyecto sintáctico. Peter Eisenman; and Proyecto pragmático. Rem Koolhaas. The central chapter is complemented by two others placed at the beginning. The first, entitled Dialéctica de la ambigüedad contemporánea. Una aproximación, carries out a chronological analysis of the evolution that the idea of ambiguity has undergone in the aesthetic theory of the twentieth century, without yet entering into architectural questions. The second, entitled Dialéctica de la ambigüedad como crítica del proyecto moderno, examines the gradual incorporation of ambiguity into the critical review of modernity, which would be of vital importance in enabling its subsequent operative introduction in postmodernity. A final chapter, placed at the end of the text, proposes a series of Proyecciones which, in light of what has been analyzed in the previous chapters, attempt to establish a rereading of the current architectural context and its possible evolution, considering at all times that reflection on ambiguity still allows new discursive horizons to be glimpsed. Each double page of the thesis synthesizes the tripartite structure of the central chapter and, broadly speaking, the main methodological tool used in the research.
In this way, the threefold semantic, syntactic and pragmatic dimension with which the postmodern theoretical project has been identified is reproduced here in a specific arrangement of images, footnotes and main body of text. The images accompanying the main text are placed in the left-hand column. Their arrangement follows aesthetic and compositional criteria, qualifying, as far as possible, their semantic condition. Next, to their right, the footnotes are placed. They are arranged in a column, and each note is placed at the same height as its corresponding reference in the main text. Their regulated distribution, their value as notation and their possible equation with a deep structure allude to their syntactic condition. Finally, the main body of the text completely occupies the right-hand half of each double page. Conceived as a continuous narrative, with hardly any interruptions, its role of satisfying the discursive demands of doctoral research corresponds to its pragmatic condition.
Abstract:
Tabled evaluation has proved to be an effective method to improve several aspects of goal-oriented query evaluation, including termination and complexity. Several "native" implementations of tabled evaluation have been developed which offer good performance, but many of them need significant changes to the underlying Prolog implementation. More portable approaches, generally using program transformation, have been proposed, but they often result in lower efficiency. We explore some techniques aimed at combining the best of these worlds, i.e., developing a portable and extensible implementation, with minimal modifications at the abstract machine level, and with reasonably good performance. Our preliminary results are promising.
Abstract:
The advantages of tabled evaluation regarding program termination and reduction of complexity are well known —as are the significant implementation, portability, and maintenance efforts that some proposals (especially those based on suspension) require. This implementation effort is reduced by program transformation-based continuation call techniques, at some efficiency cost. However, the traditional formulation of this proposal by Ramesh and Cheng limits the interleaving of tabled and non-tabled predicates and thus cannot be used as-is for arbitrary programs. In this paper we present a complete translation for the continuation call technique which, using the runtime support needed for the traditional proposal, solves these problems and makes it possible to execute arbitrary tabled programs. We present performance results which show that CCall offers a useful tradeoff that can be competitive with state-of-the-art implementations.
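For readers unfamiliar with tabling, its termination benefit can be illustrated with a memoization analogy. The sketch below is in Python rather than Prolog and only mimics the role of answer tables; it is not the continuation-call translation described in the paper, and the graph is invented.

```python
# Analogy for tabled evaluation: answers to a goal are stored in a table, so a
# cyclic "reachable" query terminates instead of looping, much as tabling makes
# left-recursive transitive closure terminate in a tabled Prolog system.
edges = {"a": ["b"], "b": ["c", "a"], "c": []}   # note the a -> b -> a cycle

def reachable(start):
    table, frontier = set(), [start]             # 'table' plays the answer-table role
    while frontier:
        node = frontier.pop()
        for nxt in edges.get(node, []):
            if nxt not in table:                 # only new answers are added
                table.add(nxt)
                frontier.append(nxt)
    return table

print(sorted(reachable("a")))                    # ['a', 'b', 'c'] despite the cycle
```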
Abstract:
Thesis basis: At the beginning of the 20th century, the interest in tourism, added to the plentiful heritage in Spain, enabled the authorities to embark on a singular experience: the creation of a hotel infrastructure from the restoration of historic buildings. Preservation, maintenance, and even profitability of a large part of the Spanish heritage would be made effective through the innovative heritage-tourism formula. Its greatest expression materialized in the Paradores Network, from its foundation in the second decade of the last century to the present day. Surprisingly, this subject has not yet been investigated in its architectural aspect, even though Spain has been a pioneer and a model in the matter of public hotel business. This project tackles the study of the most significant case among all the network's buildings, since the heritage which has served throughout history as a base for the State's hotel purposes comprises altogether six architectural types, among which military architecture stands out with its majority presence in the context of the historical buildings of the network. The archetypal character of castles and fortresses, ingrained in the collective subconscious, made them especially attractive for tourist accommodation, as they allowed the evocation of remote medieval times, despite being the most awkward architectural type for hotel restoration. The study of the interventions in these buildings clearly reveals itself as a showcase of the different criteria of heritage intervention throughout the 20th century, connecting with the present interdisciplinary perspective.
Firstly, the thesis covers different general aspects regarding the hotel developer, the domestic and international public hotel business, and the description of the Spanish state network buildings from a hotel business and an architectural point of view, the latter at its three scales of influence: architectural, urban or landscape, and interior design. Secondly, the transformation of the military architecture in the Paradores Network into hotels is analyzed. For that purpose it was necessary to create a specific classification, which included barrack-structured buildings, castle-palaces, and fortresses which had served the purposes of military-religious orders. The interventions in those military historical places where new building became compulsory were also taken into consideration. Thirdly and lastly, the thesis analyses the restorations of these military constructions through the different stages of the tourist organization. In order to avoid decontextualization, interventions in other historical buildings were also considered. This route begins with the promotion of the first two Paradores by the Royal Commissioner, the marquis of Vega-Inclán, which paved the way for the concepts and ideas that were developed in the following decades. Subsequently, the network was developed and took shape with the National Tourism Board; the first interventions on military types consisted of interior refurbishments. The network's key period, and in particular that of its military architecture, took place under the Ministry of Information and Tourism, a time marked by the "restoration to its original state" of monuments. This stage arrived after a preparatory period with the State Tourist Office, when the military type was left as a backdrop for other architectural types. After the Ministry's boom came a decline, in which castles and fortresses disappeared from the Tourist Department's interests, up to the opening of the new establishments of the 21st century and the resurgence of the military type with Lorca's Parador. Methodology: The present research project has mainly used unpublished documentation from several archives and has involved extensive in situ data-gathering. Within the heritage analyzed, military buildings have been divided into three main groups: restored buildings that began to operate in the network, those in the process of hotel transformation, and those acquired for hotel purposes which were never restored. In each case, it has been necessary to determine the condition in which they arrived to the Tourist Administration, the procedure by which they were acquired, what their first hotel restoration consisted of, and what their most significant subsequent enlargements and alterations were. These facts have been synthesized in record cards, and conclusions were drawn by comparing each unit with the whole. Simultaneously, two external factors were introduced: the history of tourism, which allowed a chronological ordering of the buildings according to different periods, and the history of the theory and practice of heritage intervention in Spain, which made it possible to compare the heritage criteria of the competent Administration with those of the Tourist Administration's interventions. Both Administrations necessarily came into contact after the Decree of 22 April 1949, by which all castles and fortresses came under the protection of the State.
Thesis contribution: In general, the thesis provides a complete ordering and systematization of the network's heritage buildings from the point of view of hotel and architectural types, besides connecting different public hotel business models for the first time, thus becoming the substratum for future investigations. The study has included the different scales that bear, in an interconnected way, on the establishment of a Parador: architectural, urban and interior design, referenced to date only from an architectural point of view. The network's historical stages have been defined according not only to the consecutive series of tourist organizations, but also, and for the first time, to the evolution of heritage interventions over time, thus connecting with the theory and praxis of monumental restoration. In particular, within the Paradores, military architecture stands out in the Ministry's period, in which all kinds of restoration possibilities were explored. In this sense, the present project puts forth a hybrid type of Parador, halfway between restoration and new building, the two basic forms of establishment created under the Royal Commission, termed here new building within a military historic enclosure. This new characterization has been assessed as the most efficient way of establishing Paradores, whose architectural guidelines cover a wide range of possibilities: the imitation of historical architectural models using borrowed heritage elements that provide historical value, the use of a modern language, or inspiration in vernacular architecture. The amalgam of elements, styles and consecutive enlargement interventions was the common feature of the establishment of a Parador, whether in a building or in a walled enclosure. Military architecture transformed into hotel establishments gives proof of the scenographic vocation of the heritage interventions, supported by the interior design, as well as of its contribution to hotel architecture with regard to the comfort, organization and functioning of its facilities. The thesis delves into the diverse aspects of hotel restoration partially pointed out by several authors, and puts forth the creation of a "medieval atmosphere" in military architecture, which reached its highest expression with the "unity of style" criterion of the Ministry of Information and Tourism. Hotel restoration within the context of the Paradores Network is characterized in this thesis in relation to interventions in military constructions, and its systematization can be extrapolated to other architectural types or publicly owned hotel chains on the basis of the study put forward in this work.
Abstract:
The International Standard ISO 140-5 on field measurements of airborne sound insulation of façades establishes that the directivity of the measurement loudspeaker should be such that the variation in the local direct sound pressure level (ΔSPL) on the sample is ΔSPL < 5 dB (or ΔSPL < 10 dB for large façades). This condition is usually not easy to accomplish, nor is it easy to verify whether the loudspeaker produces such a uniform level. Direct sound pressure levels on the ISO standard façade essentially depend on the distance and directivity of the loudspeaker used. This paper presents a comprehensive analysis of the test geometry for measuring sound insulation and explains how the loudspeaker directivity, combined with distance, affects the acoustic level distribution on the façade. The first sections of the paper focus on analysing the measurement geometry and its influence on the direct acoustic level variations on the façade. The most favourable and least favourable positions for minimising these direct acoustic level differences are found, and the angles subtended by the façade in the reference system of the loudspeaker are also determined. Then, the maximum dimensions of the façade that meet the conditions of the ISO 140-5 standard are obtained for the ideal omnidirectional sound source and for the piston radiating in an infinite baffle, which is chosen as the typical radiation pattern for loudspeakers. Finally, a complete study of the behaviour of different loudspeaker radiation models (such as those usually utilised in ISO 140-5 measurements) is performed, comparing their radiation maps on the façade in order to determine the maximum façade dimensions and the most appropriate radiation configurations.
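As a rough numerical companion to the analysis described above, the following sketch evaluates the spread of the direct free-field level over a rectangular façade for an ideal omnidirectional source (one of the radiation models mentioned) and checks the ISO 140-5 condition ΔSPL < 5 dB; the façade size, source distance and incidence angle are illustrative assumptions, not the paper's configuration.

```python
# Direct SPL spread on a façade for an ideal omnidirectional point source.
# Free-field level falls as 20*log10(r); the ISO 140-5 check is Delta SPL < 5 dB.
# Geometry below (façade size, distance, incidence angle) is illustrative only.
import numpy as np

W, H = 4.0, 3.0                      # façade width and height in metres (assumed)
d, angle = 7.0, np.radians(45.0)     # source distance and incidence angle (assumed)

# Source position relative to the façade centre (x in the façade plane, z normal to it).
src = np.array([-d * np.sin(angle), 0.0, d * np.cos(angle)])

# Sample points over the façade surface (the z = 0 plane).
x, y = np.meshgrid(np.linspace(-W / 2, W / 2, 81), np.linspace(-H / 2, H / 2, 61))
r = np.sqrt((x - src[0])**2 + (y - src[1])**2 + src[2]**2)

spl_rel = -20.0 * np.log10(r)        # direct level relative to 1 m, in dB
delta_spl = spl_rel.max() - spl_rel.min()
print(f"Delta SPL over the facade: {delta_spl:.1f} dB "
      f"({'meets' if delta_spl < 5.0 else 'exceeds'} the 5 dB condition)")
```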
Abstract:
Software Product Line Engineering has significant advantages in family-based software development. The common and variable structure for all products of a family is defined through a Product-Line Architecture (PLA), which consists of a common set of reusable components and connectors that can be configured to build the different products. The design of a PLA requires solutions for capturing this configuration (variability). The Flexible-PLA Model is a solution that supports the specification of external variability of the PLA configuration, as well as internal variability of components. However, complete support for product-line development requires translating architecture specifications into code. This complex task needs automation to avoid human error. Since Model-Driven Development allows automatic code generation from models, this paper presents a solution to automatically generate AspectJ code from Flexible-PLA models previously configured to derive specific products. This solution is supported by a modeling framework and validated in a software factory.
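To make the model-to-code step concrete, here is a deliberately tiny sketch of template-based generation in the spirit of the approach described above; the model dictionary, feature names and the emitted AspectJ-like text are invented for illustration and do not reflect the Flexible-PLA metamodel or the authors' generator.

```python
# Toy model-to-text generation: a configured product model (plain dict standing in
# for a Flexible-PLA configuration) is turned into AspectJ-like source text.
# Names, structure and output format are illustrative only.
ASPECT_TEMPLATE = """public aspect {name}Variant {{
    // woven into {target} when the '{name}' variability point is bound
    before(): execution(* {target}.{joinpoint}(..)) {{
        System.out.println("{name} feature active");
    }}
}}
"""

product_model = {  # hypothetical configured product
    "variability_points": [
        {"name": "Logging", "target": "PaymentService", "joinpoint": "charge"},
        {"name": "Encryption", "target": "PaymentService", "joinpoint": "store"},
    ]
}

def generate_aspects(model: dict) -> list[str]:
    """Emit one aspect per bound variability point in the configured model."""
    return [ASPECT_TEMPLATE.format(**vp) for vp in model["variability_points"]]

for source in generate_aspects(product_model):
    print(source)
```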
Abstract:
This doctoral thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCA), which have been proposed within academic research for some 17 years. Related research has grown considerably in recent decades, while designs offering solid and effective protection against such attacks remain an open research topic, in which more reliable initiatives are needed for the protection of personal, corporate and national data. The first documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were used in inscriptions. Information security has always been a key factor in the transmission of data related to diplomatic or military intelligence. Owing to the rapid evolution of modern communication techniques, encryption solutions were incorporated to guarantee the security, integrity and confidentiality of content transmitted over insecure wired or wireless media. Given the limited computing power available before the computer era, simple encryption techniques were more than sufficient to conceal information. However, some algorithmic vulnerabilities can be exploited to recover the coding rule without much effort. This motivated new research in the area of cryptography in order to protect information systems against sophisticated algorithms. The invention of the computer greatly accelerated the implementation of secure cryptography, which offers efficient resistance backed by vastly increased computing capabilities. Equally, sophisticated cryptanalysis has driven computing technology forward. Today, the information world is thoroughly involved with the field of cryptography, which aims to protect every domain through diverse encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theory and efficient hardware practice, making implementation possible on a variety of platforms (microprocessors, ASICs, FPGAs, etc.). Security needs and requirements in industry are the main driving metrics in electronic design, with the goal of promoting powerful products without sacrificing the security of customers. However, a vulnerability in practical implementations found by Prof. Paul Kocher et al. in 1996 implies that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis. As a result, criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor) built on a silicon substrate during fabrication. The logic of the circuit is realized through the innumerable switching events of these cells. This mechanism inevitably causes a certain physical emanation that can be measured and correlated with the internal behavior of the circuit.
SCA can be used to reveal confidential data (for example, cryptographic keys), to analyze the logic architecture and timing, and even to inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs or smart cards. By correlating an estimated amount of leakage with the leakage actually measured, confidential information can be reconstructed with far less time and computation. To be precise, SCA covers a wide range of attack types, such as analyses of power consumption and ElectroMagnetic (EM) radiation. Both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically designed to resist SCA. It is therefore necessary, during circuit implementation, to integrate measures that camouflage the leakage through "side channels". Countermeasures against SCA evolve alongside the development of new attack techniques, as well as the continuous improvement of electronic devices. The physical nature of the leakage requires countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic solutions. Extrinsic countermeasures aim to confuse the attack source by adding noise or misaligning the internal activity. By comparison, intrinsic countermeasures are integrated into the algorithm itself, modifying the implementation in order to minimize the measurable leakage, or even to make that leakage unmeasurable. Hiding and Masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level to alter the sensitive intermediate data with a mask in a reversible way. Unlike linear masking, the non-linear operations that are widespread in modern ciphers are difficult to mask. The hiding method, which has been verified as an effective solution, mainly comprises dual-rail coding, which is specifically devised to flatten or remove the data-dependent leakage in power or EM. In this doctoral thesis, in addition to describing attack methodologies, great effort has been devoted to the structure of the proposed logic prototype, in order to carry out security-oriented research on logic-level architectural countermeasures. One characteristic of SCA lies in the format of the leakage sources. The typical side-channel attack is the power-based analysis, in which the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential sources of leakage. Therefore, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports and routing. The EDA tools provided by vendors manipulate the logic from a higher level rather than from the gate level, where side-channel leakage actually manifests itself. As a result, classical implementations barely satisfy these needs and inevitably cripple the prototype. For all these reasons, a customized and flexible design scheme must be considered.
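As a concrete illustration of the masking countermeasure mentioned above, the sketch below applies a first-order Boolean mask to one byte of a toy cipher round. The placeholder S-box and the fixed table masks are illustrative assumptions, not a construction from this thesis; the point is only that linear (XOR) steps let the mask pass through, while the non-linear S-box step requires a recomputed masked table.

```python
# Illustrative sketch of first-order Boolean masking (the "reversible mask"
# mentioned above). Linear (XOR-based) steps commute with the mask; non-linear
# steps such as an S-box do not, which is why they need recomputed masked tables.
import secrets

SBOX = list(range(256))          # placeholder S-box; a real cipher defines its own
MASK_IN, MASK_OUT = 0x5C, 0xA3   # example masks; in practice drawn fresh per execution

# Masked S-box table: maps (x ^ MASK_IN) -> SBOX[x] ^ MASK_OUT
MASKED_SBOX = [SBOX[x] ^ MASK_OUT for x in range(256)]
MASKED_SBOX = [MASKED_SBOX[i ^ MASK_IN] for i in range(256)]

def masked_round_step(plain_byte: int, key_byte: int) -> int:
    """Process a byte without ever handling the unmasked intermediate value."""
    m = secrets.randbits(8)                     # fresh random mask
    masked = plain_byte ^ m                     # masked data enters the datapath
    masked ^= key_byte                          # linear step: mask passes through unchanged
    masked ^= m ^ MASK_IN                       # switch to the mask expected by the table
    masked = MASKED_SBOX[masked]                # non-linear step via recomputed table
    return masked ^ MASK_OUT                    # unmask only at the very end (demo only)

print(hex(masked_round_step(0x42, 0x3F)))      # equals hex(SBOX[0x42 ^ 0x3F]) here
```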
This thesis presents the design and implementation of an innovative logic style to counter SCA, which addresses three fundamental aspects: I. It is based on a hiding strategy over a gate-level dual-rail circuit to dynamically balance the leakage of the lower layers; II. The logic exploits the architectural features of FPGAs to minimize the resource overhead of the implementation; III. It is supported by a set of customized assistant tools, incorporated into the generic FPGA design flow, in order to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating its practical application on FPGA families from the vendor Xilinx. The methodology and the tools are flexible enough to be extended to a wide range of applications in which much more rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to simplify the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions. All the proposals are effectively backed by experimental attacks in order to validate the security advantages of the system. This research work aims to close the gap between the implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of implementation tools for FPGAs, developed to work together with their generic design flow, in order to create the dual-rail logic in an innovative way. A new approach in the field of encryption security is proposed to achieve customization, automation and flexibility in fine-grained, low-level circuit prototyping. The main contributions of this research work are briefly summarized below: Precharge Absorbed-DPL (PA-DPL) logic: netlist conversion is used to reserve free LUTs to drive the Precharge and Ex signals in a DPL style. Row-crossed interleaved placement with identical routing pairs in dual-rail networks, which helps to increase resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic: (a) to detect and repair conflicts in the connections; (b) to detect and repair asymmetric routes; (c) to be used in other logic styles where strict control of the interconnections is required in Xilinx-based applications. A customized CPA testbed for EM and power analysis, including the construction of the platform, the measurement method and the attack analysis. A timing analysis to quantify the security levels. Security partitioning through partial conversion of a complex cipher system to reduce the cost of protection.
A proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This doctoral thesis is organized as follows: Chapter 1 covers the fundamentals of side-channel attacks, from basic concepts and analysis models to the implementation of the platform and the execution of the attacks. Chapter 2 presents SCA-resistance strategies against differential power and EM attacks. In addition, this chapter proposes, as a major contribution, a compact and secure dual-rail logic, together with the logic transformation based on a gate-level design. Chapter 3 addresses the challenges of implementing generic dual-rail logic; it describes a customized design flow to solve the implementation problems, together with a proposed automatic development tool, to mitigate the design barriers and ease the process. Chapter 4 describes in detail the development and implementation of the proposed tools. The verification and security validation of the proposed logic, as well as a sophisticated routing security verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and the outlook for future work are included in Chapter 6. To go deeper into the contents of the thesis, each chapter is described in more detail below: Chapter 1 introduces the hardware implementation platform and the basic theory of side-channel attacks, and mainly contains: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected encryption algorithm (a commercial Advanced Encryption Standard (AES) module); (c) the essentials of side-channel methods, which reveal the dissipation leakage correlated with the internal behavior, and the method for recovering the relationship between the physical fluctuations in the side-channel traces and the internal data being processed; (d) the configurations of the power/EM test platforms covered in this thesis. The content of the thesis is broadened and deepened from Chapter 2 onward, which addresses several key aspects. First, the protection principle of dynamic compensation in generic Dual-rail Precharge Logic (DPL) is explained by describing the gate-level compensated elements. Second, the PA-DPL logic is proposed as an original contribution, detailing the logic protocol and an application case. Third, two customized design flows are shown for carrying out the dual-rail conversion; along with this, the technical definitions related to manipulation above the LUT-level netlist are clarified. Finally, a brief discussion of the overall process closes the chapter. Chapter 3 studies the main challenges in implementing DPLs on FPGAs.
The security level of the SCA-resistance solutions found in the state of the art has degraded because of the implementation barriers imposed by conventional EDA tools. In the FPGA architecture scenario studied, the problems of dual-rail formats, parasitic impacts, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without degrading the security level, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structural features such as interleaved dual rails or cloned routing networks, to implementation methods such as the customized EDA automation tools. In addition, a self-adaptive heating system is presented and applied to a dual-core logic, in order to alternately adjust the local temperature so as to balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the implementation details of the toolkit. Built on a third-party API, the customized toolkit is able to manipulate the logic elements of a post-P&R circuit in ncd format (an unreadable binary version of the xdl) converted to the Xilinx XDL format. The mechanism and rationale of the proposed toolkit are carefully described, covering routing detection and the repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter also specifies the technical foundations that support implementations on Xilinx devices and the flexibility to be used in other applications. Chapter 5 focuses on the case studies used to validate the security levels of the proposed logic. The detailed technical problems encountered during execution and some new implementation techniques are discussed. (a) The impact of the placement process on the logic using the proposed toolkit is discussed; different implementation schemes, taking into account the global optimization of security and cost, are verified experimentally in order to find optimized placement and repair plans; (b) the security validations are performed with correlation and timing analysis methods; (c) a systematic tactic is applied to a BCDL-structured AES core to rigorously validate the impact of routing on the security metrics; (d) preliminary results using the self-adaptive heating system against process variation are shown; (e) a practical application of the tools to a complete cipher design is introduced. Chapter 6 includes the general summary of the work presented in this doctoral thesis. Finally, a brief outlook on future work is given, which may extend the potential use of the contributions of this thesis to a scope beyond the domain of cryptography on FPGAs.
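As a very rough sketch of the kind of net-symmetry checking the toolkit performs (Chapter 4), the following Python fragment compares the "shape" of a true-rail net and a false-rail net under a deliberately simplified netlist model. The Hop structure and wire names are hypothetical; the real toolkit parses Xilinx XDL and deals with device-specific routing resources and switch boxes.

```python
# Rough sketch of a net-symmetry check for dual-rail routing, under a
# deliberately simplified netlist model (hypothetical Hop/wire names).
from typing import List, NamedTuple


class Hop(NamedTuple):
    wire_type: str      # routing resource class (illustrative)
    dx: int             # relative displacement of this hop
    dy: int


def shape_of(net: List[Hop]) -> List[Hop]:
    """The placement-independent 'shape' of a routed net: resource types and offsets."""
    return list(net)


def repair_candidates(true_rail: List[Hop], false_rail: List[Hop]) -> List[int]:
    """Return the hop indices where the two rails diverge (candidates for re-routing)."""
    diffs = [i for i, (a, b) in enumerate(zip(true_rail, false_rail)) if a != b]
    if len(true_rail) != len(false_rail):
        diffs.extend(range(min(len(true_rail), len(false_rail)),
                           max(len(true_rail), len(false_rail))))
    return diffs


if __name__ == "__main__":
    rail_t = [Hop("DOUBLE", 2, 0), Hop("SINGLE", 1, 0), Hop("PIN", 0, 0)]
    rail_f = [Hop("DOUBLE", 2, 0), Hop("SINGLE", 0, 1), Hop("PIN", 0, 0)]
    print(repair_candidates(shape_of(rail_t), shape_of(rail_f)))   # -> [1]
```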
ABSTRACT
This PhD thesis mainly concentrates on countermeasure techniques related to Side-Channel Attacks (SCA), which have been an active topic of academic research for some 17 years. Related research has grown remarkably over the past decades, while the design of solid and efficient protection still remains an open research topic, in which more reliable initiatives are required for personal information privacy and for enterprise and national data protection. The earliest documented usage of secret code can be traced back to around 1700 B.C., when hieroglyphs in ancient Egypt were used in inscriptions. Information security has always received serious attention in diplomatic or military intelligence transmission. Due to the rapid evolution of modern communication techniques, cryptographic solutions were incorporated into electronic transmission to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of content carried over insecure cable or wireless channels. Restricted by the computing power available before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities could be exploited to recover the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which aims to guard information systems with complex and advanced algorithms. The appearance of computers greatly accelerated the invention of robust cryptography, which offers efficient resistance built on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn driven computing technologies forward. Nowadays, the information world is deeply involved with cryptography, protecting virtually every field through pervasive crypto solutions. These approaches are strong because of the optimized merging of modern mathematical theory and effective hardware practice, and are capable of implementing crypto theory on various platforms (microprocessor, ASIC, FPGA, etc.). Security needs from industry are the major driving metrics in electronic design, aiming to build high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, which has since been named the side-channel attack after its source of analysis. Critical suspicions about theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. To be specific, digital circuits typically consist of a great number of essential logic elements (such as MOS, Metal Oxide Semiconductor, cells), built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably results in characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. crypto keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs or smart cards. Using various methods of comparison between the predicted leakage quantity and the measured leakage, secrets can be reconstructed at a much lower expense of time and computation.
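The correlation-based key recovery summarized above can be sketched in a few lines. The following Python example assumes a Hamming-weight leakage model on the first-round S-box output and single-sample synthetic traces; the random-permutation S-box stands in for the real AES table, and nothing here reproduces the thesis's actual measurement setup.

```python
# Minimal sketch of correlation-style key recovery (CPA) on synthetic traces,
# assuming a Hamming-weight leakage model on the first-round S-box output.
import numpy as np

# Stand-in for the cipher's non-linear S-box (the real AES table would be used in practice).
SBOX = np.random.default_rng(1).permutation(256).astype(np.uint8)
HW = np.array([bin(x).count("1") for x in range(256)])   # Hamming-weight model

def cpa_best_key(plaintexts: np.ndarray, traces: np.ndarray) -> int:
    """Return the key byte whose predicted leakage correlates best with the traces."""
    scores = []
    for k in range(256):
        hypo = HW[SBOX[plaintexts ^ k]]                   # hypothetical leakage per trace
        scores.append(abs(np.corrcoef(hypo, traces)[0, 1]))
    return int(np.argmax(scores))

# Synthetic demo: single-sample "traces" equal to noisy Hamming weight under key 0x2B.
rng = np.random.default_rng(0)
pts = rng.integers(0, 256, size=2000, dtype=np.uint8)
traces = HW[SBOX[pts ^ 0x2B]] + rng.normal(0.0, 1.0, size=pts.size)
print(hex(cpa_best_key(pts, traces)))                     # expected: 0x2b
```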
To be precise, SCA covers a wide range of attack types, typically the analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a large number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance. Because of this, much attention has to be paid during implementation to assemble countermeasures that camouflage the leakage via "side channels". Countermeasures against SCA evolve along with the development of attack techniques. The physical nature of the leakage requires countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic vectors. Extrinsic countermeasures are executed to confuse the attacker by adding noise or misalignment to the internal activity. Comparatively, intrinsic countermeasures are built into the algorithm itself, modifying the implementation to minimize the measurable leakage or to make it insensitive to the processed data. Hiding and Masking are two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering the sensitive intermediate values with a mask in reversible ways. Unlike linear masking, the non-linear operations that widely exist in modern ciphers are difficult to mask. Proven to be an effective countermeasure, the hiding method mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from describing the attack methodologies, effort has also been dedicated to a logic prototype, in order to mount extensive security investigations into logic-level countermeasures. One characteristic of SCA lies in the format of the leakage sources. The typical side-channel attack is the power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro units, such as basic logic gates, I/O ports and routing. The vendor-provided EDA tools manipulate the logic from a higher behavioral level rather than from the lower gate level, where side-channel leakage is generated, so classical implementations barely satisfy these needs and inevitably stunt the prototype. In this case, a customized and flexible design scheme needs to be devised. This thesis profiles an innovative logic style to counter SCA, which addresses three major aspects: I. The proposed logic is based on a hiding strategy over a gate-level dual-rail style to dynamically balance the side-channel leakage of the lower circuit layers; II. The logic exploits the architectural features of modern FPGAs to minimize the implementation expense; III. It is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, to carry out the circuit manipulations automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families, while the methodology and the tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated with selected, widely used crypto algorithms, for a thorough and fair evaluation with respect to prior solutions.
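The hiding principle behind dual-rail precharge logic, referred to above, can be illustrated with a toy behavioural simulation: every value travels on a complementary rail pair, and each evaluation is preceded by a precharge that resets both rails, so the number of transitions per cycle no longer depends on the data. This is only a sketch of the principle, not the PA-DPL gate construction proposed in the thesis.

```python
# Toy illustration of the dual-rail precharge idea: each logic value v is carried
# by a complementary pair (v, not v), and every evaluation is preceded by a
# precharge phase that resets both rails to 0. The number of rail transitions per
# precharge/evaluate cycle is then constant, regardless of the data processed.

def transitions_single_rail(values):
    """Toggle count of an ordinary single-rail wire over a sequence of values."""
    return sum(a != b for a, b in zip(values, values[1:]))

def transitions_dual_rail(values):
    """Toggle count of a (true, false) rail pair with a precharge-to-(0,0) phase."""
    waveform, prev = [], (0, 0)
    for v in values:
        waveform += [(0, 0), (v, 1 - v)]          # precharge, then evaluate
    toggles = 0
    for a, b in zip([prev] + waveform, waveform):
        toggles += (a[0] != b[0]) + (a[1] != b[1])
    return toggles

data_a = [1, 1, 1, 1]       # low single-rail activity
data_b = [0, 1, 0, 1]       # high single-rail activity
print(transitions_single_rail(data_a), transitions_single_rail(data_b))   # 0 vs 3
print(transitions_dual_rail(data_a), transitions_dual_rail(data_b))       # equal counts
```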
All the proposals are effectively verified by security experiments. The presented research work attempts to overcome these implementation difficulties. In essence, this thesis describes a customized implementation toolkit for modern FPGAs, developed to work together with the generic FPGA design flow in order to create the innovative dual-rail logic. A new approach in the crypto security area is constructed to obtain customization, automation and flexibility in fine-grained, low-level circuit prototyping, including the otherwise intractable routing. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL (PA-DPL) logic: netlist conversion is used to reserve free LUT inputs to drive the Precharge and Ex signals in a dual-rail logic style. A row-crossed interleaved placement method with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in Xilinx scenarios. A customized correlation-analysis testbed for EM and power attacks, including the platform construction, the measurement method and the attack analysis. A timing-analysis-based method for quantifying the security grades. A methodology for security partitioning of complex crypto systems to reduce the protection cost. A proof-of-concept self-adaptive heating system to mitigate the electrical impact of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows: Chapter 1 discusses the side-channel attack fundamentals, covering theoretic basics, analysis models, platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks. In this chapter, a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates on the implementation challenges of generic dual-rail styles. A customized design flow to solve the implementation problems is described, along with a self-developed automatic implementation toolkit, for mitigating the design barriers and facilitating the process. Chapter 4 elaborates the tool specifics and construction details. The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 provides the introduction of the hardware implementation testbed and the side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module; (c) the essentials of side-channel methods are profiled.
They reveal the dissipation leakage correlated with the internal behavior, and the method for recovering the relationship between the physical fluctuations in the side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used in this thesis work are given. The content of this thesis is expanded and deepened from Chapter 2 onward, which covers several aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are presented; meanwhile, the technical definitions concerning manipulation above the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is degraded by the implementation barriers of conventional EDA tools. In the studied FPGA scenario, the problems are discussed in terms of dual-rail format, parasitic impact, technological bias and implementation feasibility. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is realized through a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and applied to a dual-core logic, intended to alternately adjust the local temperature so as to balance the negative impact of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit (an unreadable binary version of the xdl one) converted to the Xilinx xdl format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter also specifies the technical essentials for supporting the implementations on Xilinx devices and the flexibility to extend them to other applications. Chapter 5 focuses on the case studies used to validate the security grades of the proposed logic style with the proposed toolkit. Comprehensive implementation techniques are discussed. (a) The placement impact of the proposed toolkit is discussed; different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are carried out with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metric; (d) the preliminary results using the self-adaptive heating system against process variation are given; (e) a practical application of the proposed toolkit to a large design is introduced. Chapter 6 includes the general summary of the complete work presented in this thesis.
Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.