879 results for intrinsic and extrinsic InP
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This paper examines psychological aspects of present-day bodybuilding practitioners: their interests and motivations, the factors that lead them to engage in physical activity, common body-image disorders, and the main physical and psychological benefits that correctly performed weight training can provide. The work draws on Self-Determination Theory and its account of individual behavior, exploring the types of motivation, intrinsic and extrinsic, and their regulatory mechanisms. We relate these to weight training and the environment that surrounds it, including the interaction between student and teacher, performance and results, and the effects this interference can have on motivation and individual behavior. We also address body image, practitioners' perception of their own bodies, and the disorders this can promote. We describe vigorexia and its characteristics, relate body image to self-esteem, and discuss the standards set by society, aesthetic influence, and media exposure. Finally, we examine the main benefits that well-directed physical activity can offer its practitioners: physical, mental, and social gains, improved self-esteem, and better interpersonal skills. We also address the growth of strength training in recent years and its contribution, across age groups, to the population's quality of life.
Abstract:
"Who studied what, when, and why?" This question frames the topic of this thesis, which contributes to the discussion of educational decisions at the societal, organizational, and individual level. The analysis starts from a thorough theoretical embedding of the topic in various concepts and a review of the state of research. Socio-structural characteristics, the role of life orientations, and the complex of individual motivations are discussed and related, among other things, to Alfred Schütz's action-theoretical distinction between "in-order-to" and "because" motives. This framework and the hypotheses derived from it are examined in a quantitative empirical analysis. The data come from the Studierendensurvey of the AG Hochschulforschung at the University of Konstanz. Binary logistic regression analyses identify specific influence structures and subject-specific profiles. The conceptualization of intrinsic and extrinsic motivations in particular reveals clear differences between fields of study. The period 1985-2007 also shows changes in the influence structures of subject choice, such as the declining importance of social origin for the choice of subject. Finally, the relationship between the influence structures of subject choice and study satisfaction is analyzed: specific structures of subject choice also matter for students' satisfaction and hence for academic success.
Abstract:
Emery-Dreifuss muscular dystrophy (EDMD) is an inherited degenerative myopathy characterized by muscle weakness and atrophy without involvement of the nervous system. EDMD patients also present cardiomyopathy with conduction defects, carrying a risk of sudden death. Several studies have shown the involvement of cytokines in various muscular dystrophies, causing chronic inflammation, bone resorption, and cell necrosis. We simultaneously measured the concentrations of cytokines, chemokines, and growth factors in the serum of a group of 25 EDMD patients. The analysis revealed an increase in cytokines such as IL-17, TGFβ2, IFN-γ, and TGFβ1. In addition, a reduction of the growth factor VEGF and of the chemokine RANTES was detected in the serum of EDMD patients compared with controls. Further ELISA assays showed increased levels of TGFβ2 and IL-6 in the culture medium of EDMD2 fibroblasts. To test the effect of the altered cytokines on muscle, we used conditioned medium from EDMD fibroblasts to differentiate murine C2C12 myoblasts. A reduced degree of differentiation was observed in myoblasts conditioned with EDMD medium. Treating these cells with neutralizing antibodies against TGFβ2 and IL-6 improved the degree of differentiation. In C2C12 cells expressing the H222P mutation of the Lmna gene, no cytokine alterations and no benefit from neutralizing antibodies were observed. The data show a pathogenetic effect of the altered cytokines, as observed in patients' fibroblasts and serum, suggesting an effect on the fibrotic tissue of EDMD muscles. An effect intrinsic to the lamin A mutation was detected on caveolin 3 expression in differentiated EDMD myoblasts.
These results add to the available data on the pathogenesis of EDMD, confirming that both intrinsic and extrinsic factors contribute to the disease. The use of specific neutralizing antibodies against extrinsic factors could represent a therapeutic approach, as shown in this study.
Abstract:
Objective: To review the literature to identify and synthesize the evidence on risk factors for patient falls in geriatric rehabilitation hospital settings. Data sources: Eligible studies were systematically searched on 16 databases from inception to December 2010. Review methods: The search strategies used a combination of terms for rehabilitation hospital patients, falls, risk factors and older adults. Cross-sectional, cohort, case-control studies and randomized clinical trials (RCTs) published in English that investigated risks for falls among patients ≥65 years of age in rehabilitation hospital settings were included. Studies that investigated fall risk assessment tools, but did not investigate risk factors themselves or did not report a measure of risk (e.g. odds ratio, relative risk) were excluded. Results: A total of 2,824 references were identified; only eight articles concerning six studies met the inclusion criteria. In these, 1,924 geriatric rehabilitation patients were followed. The average age of the patients ranged from 77 to 83 years, the percentage of women ranged from 56% to 81%, and the percentage of fallers ranged from 15% to 54%. Two were case-control studies, two were RCTs and four were prospective cohort studies. Several intrinsic and extrinsic risk factors for falls were identified. Conclusion: Carpet flooring, vertigo, being an amputee, confusion, cognitive impairment, stroke, sleep disturbance, anticonvulsants, tranquilizers and antihypertensive medications, age between 71 and 80, previous falls, and need for transfer assistance are risk factors for geriatric patient falls in rehabilitation hospital settings.
Abstract:
During school-to-work transition, adolescents develop values and prioritize what is important in their life. Values are concepts or beliefs about desirable states or behaviors that guide the selection or evaluation of behavior and events, and are ordered by their relative importance (Schwartz & Bilsky, 1987). Stressing the important role of values, career research has intensively studied the effect of values on educational decisions and early career development (e.g. Eccles, 2005; Hirschi, 2010; Rimann, Udris, & Weiss, 2000). Few researchers, however, have investigated so far how values develop in the early career phase and how value trajectories are influenced by individual characteristics. Values can be oriented towards specific life domains, such as work or family. Work values include intrinsic and extrinsic aspects of work (e.g., self-development, cooperation with others, income) (George & Jones, 1997). Family values include the importance of partnership, the creation of one's own family, and having children (Mayer, Kuramschew, & Trommsdroff, 2009). Research indicates that work values change considerably during early career development (Johnson, 2001; Lindsay & Knox, 1984). Individual differences in work values and value trajectories are found e.g., in relation to gender (Duffy & Sedlacek, 2007), parental background (Loughlin & Barling, 2001), personality (Lowry et al., 2012), education (Battle, 2003), and the anticipated timing of school-to-work transition (Porfeli, 2007). In contrast to work values, research on family value trajectories is rare, and knowledge about their development during the school-to-work transition and early career development is lacking. This paper aims at filling this research gap.
Focusing on family values and intrinsic work values, we expect a) family and work values to change between ages 16 and 25, and b) initial levels of family and work values as well as value change to be predicted by gender, reading literacy, ambition, and expected duration of education. Method. Using data from 2620 young adults (59.5% female) who participated in the Swiss longitudinal study TREE, latent growth modeling was employed to estimate the initial level and growth rate per year for work and family values. Analyses are based on TREE waves 1 (year 2001, first year after compulsory school) to 8 (year 2010). Variables in the models included family values and intrinsic work values, gender, reading literacy, ambition, and expected duration of education. Language region was included as a control variable. Results. Family values did not change significantly over the first four years after leaving compulsory school (mean slope = -.03, p = .36). They did, however, increase significantly five years after compulsory school (mean slope = .13, p < .001). Intercept (.23, p < .001), first slope (.02, p < .001), and second slope (.01, p < .001) showed significant variance. Initial levels were higher for men and those with higher ambitions. Increases were found to be steeper for males as well as for participants with lower educational duration expectations and reading skills. Intrinsic work values increased over the first four years (mean slope = .03, p < .05) and showed a tendency to decrease in years five to ten (mean slope = -.01, p < .10). Intercept (.21, p < .001), first slope (.01, p < .001), and second slope (.01, p < .001) showed significant variance, meaning that there are individual differences in initial levels and growth rates. Initial levels were higher for females, those with higher ambitions, those expecting longer educational pathways, and those with lower reading skills.
Growth rates were lower in the first phase and steeper in the second phase for males compared to females. Discussion. In general, results showed different patterns of work and family value trajectories, and different individual factors related to initial levels and development after compulsory school. Developments seem to fit major life and career roles: in the first years after compulsory school, young adults may be engaged in becoming established in their jobs; later on, raising a family becomes more important. That we found significant gender differences in work and family trajectories may reflect attempts to overcome traditional roles: overall, women increase in work values and men increase in family values, resulting in an overall trend to converge.
Abstract:
Objective: A number of intrinsic and extrinsic risk factors for the rupture of intracranial aneurysms have been identified. Still, the cause precipitating aneurysm rupture remains unknown in many cases. In addition, it has been observed that aneurysm ruptures cluster in time, but the trigger mechanism remains obscure. As solar activity has been associated with cardiovascular mortality and morbidity, we decided to study its association with aneurysm rupture in the Swiss population. Method: Patient data were extracted from the Swiss SOS database, at the time of analysis covering 918 patients with angiography-proven aSAH treated at seven Swiss neurovascular centers between 01/01/2009 and 12/31/2011. The number of aneurysm ruptures per day, week, and month (Daily/Weekly/Monthly Rupture Frequency = RF) was measured and correlated with the absolute amount of, and the change in, various parameters of interest representing continuous measurements of solar activity (radioflux (F10.7 index), solar proton flux, solar flare occurrence, planetary K-index/planetary A-index) using Poisson regression analysis. Results: Of a consecutive series of 918 cases of SAH, precise determination of the date of symptom onset was possible in 816 (88.9%). During the period of interest there were 517 days without a recorded aneurysm rupture. There were 398, 139, 27, and 12 days with 1, 2, 3, and 4 ruptures per day, respectively. Five or six ruptures were noted on a single day each. Poisson regression analysis demonstrated a significant correlation between the F10.7 index and aneurysm rupture (incidence rate ratio (IRR) = 1.006303; standard error (SE) 0.0013201; 95% confidence interval (CI) 1.003719–1.008894; p<0.001), according to which every 1-unit increase of the F10.7 index increased the expected count of aneurysm ruptures by 0.63%. As the F10.7 index is known to correlate well with the Space Environment Services Center (SESC) sunspot number, we performed additional analyses on SESC sunspot number and sunspot area.
Here, a likewise statistically significant relationship emerged for both the SESC sunspot number (IRR 1.003413; SE 0.0007913; 95% CI 1.001864–1.004965; p<0.001) and the sunspot area (IRR 1.000419; SE 0.0000866; 95% CI 1.000249–1.000589; p<0.001). All other variables analyzed showed no correlation with RF. Conclusions: Using valid methods, we found higher radioflux, sunspot number, and sunspot area to be associated with an increased count of aneurysm ruptures. Since we used rupture frequencies rather than incidences, and because we cannot explain the physiological basis of this statistical association, its clinical meaningfulness must be interpreted carefully. Future studies are warranted to rule out a type-1 error.
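The interpretation of an incidence rate ratio from a Poisson regression can be illustrated with a short calculation. This sketch is illustrative only: it reuses the published IRR for the F10.7 index and assumes, as in any Poisson model, that expected counts scale multiplicatively with exp(coefficient); the 50-unit change is a hypothetical value, not one from the study.

```python
import math

# IRR for the F10.7 index reported above
irr = 1.006303

# In a Poisson model log(E[count]) = a + b*x, the IRR equals exp(b),
# so a 1-unit increase in x multiplies the expected daily count by IRR.
b = math.log(irr)
pct_per_unit = (irr - 1) * 100       # ≈ 0.63% per unit of F10.7

# A hypothetical 50-unit rise in the index compounds multiplicatively:
multiplier_50 = math.exp(b * 50)     # identical to irr ** 50

print(round(pct_per_unit, 2))        # 0.63
print(round(multiplier_50, 3))
```

This is why the abstract reads "every 1-unit increase ... increased the count ... by 0.63%": the percentage is simply (IRR − 1) × 100, and larger changes compound rather than add.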
Abstract:
The endemic cichlid fishes of Lakes Malawi, Tanganyika and Victoria are textbook examples of explosive speciation and adaptive radiation, and their study promises to yield important insights into these processes. Accurate estimates of species richness of lineages in these lakes, and elsewhere, will be a necessary prerequisite for a thorough comparative analysis of the intrinsic and extrinsic factors influencing rates of diversification. This review presents recent findings on the discoveries of new species and species flocks and critically appraises the relevant evidence on species richness from recent studies of polymorphism and assortative mating, generally using behavioural and molecular methods. Within the haplochromines, the most species-rich lineage, there are few reported cases of postzygotic isolation, and these are generally among allopatric taxa that are likely to have diverged a relatively long time in the past. However, many taxa, including many which occur sympatrically and do not interbreed in nature, produce viable, fertile hybrids. Prezygotic barriers are more important, and persist in laboratory conditions in which environmental factors have been controlled, indicating the primary importance of direct mate preferences. Studies to date indicate that estimates of alpha (within-site) diversity appear to be robust. Although within-species colour polymorphisms are common, these have been taken into account in previous estimates of species richness. However, overall estimates of species richness in Lakes Malawi and Victoria are heavily dependent on the assignation of species status to allopatric populations differing in male colour. Appropriate methods for testing the specific status of allopatric cichlid taxa are reviewed and preliminary results presented.
Abstract:
Minimal residual disease (MRD) is a major hurdle in the eradication of malignant tumors. Despite the high sensitivity of various cancers to treatment, some residual cancer cells persist and lead to tumor recurrence and treatment failure. Obvious reasons for residual disease include mechanisms of secondary therapy resistance, such as the presence of mutant cells that are insensitive to the drugs, or the presence of cells that become drug resistant due to activation of survival pathways. In addition to such unambiguous resistance modalities, several patients with relapsing tumors do not show refractory disease and respond again when the initial therapy is repeated. These cases cannot be explained by the selection of mutant tumor cells, and the precise mechanisms underlying this clinical drug resistance are ill-defined. In the current review, we put special emphasis on cell-intrinsic and -extrinsic mechanisms that may explain mechanisms of MRD that are independent of secondary therapy resistance. In particular, we show that studying genetically engineered mouse models (GEMMs), which highly resemble the disease in humans, provides a complementary approach to understand MRD. In these animal models, specific mechanisms of secondary resistance can be excluded by targeted genetic modifications. This allows a clear distinction between the selection of cells with stable secondary resistance and mechanisms that result in the survival of residual cells but do not provoke secondary drug resistance. Mechanisms that may explain the latter feature include special biochemical defense properties of cancer stem cells, metabolic peculiarities such as the dependence on autophagy, drug-tolerant persisting cells, intratumoral heterogeneity, secreted factors from the microenvironment, tumor vascularization patterns and immunosurveillance-related factors. We propose in the current review that a common feature of these various mechanisms is cancer cell dormancy. 
Therefore, dormant cancer cells appear to be an important target in the attempt to eradicate residual cancer cells, and eventually cure patients who repeatedly respond to anticancer therapy but lack complete tumor eradication.
Abstract:
This doctoral thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCA), which have been studied in academic research for the past 17 years. Related research has grown remarkably over recent decades, while designs offering solid and effective protection against such attacks remain an open research topic, in which more reliable initiatives are needed to protect personal, corporate, and national data. The earliest documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were carved in inscriptions. Information security has always been a key factor in the transmission of diplomatic or military intelligence. With the rapid evolution of modern communication techniques, encryption solutions were first incorporated to guarantee the security, integrity, and confidentiality of content transmitted over insecure cables or wireless media. Given the limited computing power before the computer era, simple encryption was more than sufficient to conceal information. However, certain algorithmic vulnerabilities could be exploited to recover the encoding rule without much effort. This motivated further research in cryptography, aiming to protect information systems against sophisticated algorithms. The invention of computers greatly accelerated the implementation of secure cryptography, which offers efficient resistance based on greatly strengthened computing capabilities. Likewise, sophisticated cryptanalysis has in turn driven computing technologies.
Today, the information world is deeply involved with cryptography, which protects virtually every field through diverse encryption solutions. These approaches have been strengthened by the optimized combination of modern mathematical theory and effective hardware practice, making implementation possible on various platforms (microprocessors, ASICs, FPGAs, etc.). Industrial security needs and requirements are the main driving metrics in electronic design, with the goal of producing powerful products without sacrificing customer security. However, a vulnerability in practical implementations found by Prof. Paul Kocher et al. in 1996 showed that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis. Criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor, transistors) built on a silicon substrate during fabrication. The circuit's logic is realized through the countless switching events of these cells. This mechanism inevitably produces particular physical emanations that can be measured and correlated with the internal behavior of the circuit. SCA can be used to reveal confidential data (for example, cryptographic keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs, or smart cards.
By comparing the correlation between the estimated leakage and the actually measured leakage, confidential information can be reconstructed with far less time and computation. More precisely, SCA covers a wide range of attack types, such as power consumption and electromagnetic (EM) radiation analyses. Both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically resistant to SCA. It is therefore necessary, during circuit implementation, to integrate measures that camouflage the leakage through these "side channels". Countermeasures against SCA evolve alongside new attack techniques and the continuous improvement of electronic devices. The physical characteristics call for countermeasures at the physical layer, which can generally be classified as intrinsic or extrinsic solutions. Extrinsic countermeasures aim to confuse the attack source by injecting noise or misaligning internal activity. By comparison, intrinsic countermeasures are integrated into the implementation itself, modifying it to minimize measurable leakage or even make the leakage unmeasurable. Hiding and masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level to alter sensitive intermediate data with a mask in a reversible way. Unlike linear masking, the nonlinear operations widespread in modern ciphers are difficult to mask. The hiding method, which has been verified as an effective solution, mainly comprises dual-rail coding, devised specifically to flatten or eliminate the data-dependent leakage in power or EM.
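The correlation comparison described above is the core of correlation power analysis (CPA). The following minimal sketch illustrates the principle under simplifying assumptions of my own (a Hamming-weight leakage of plaintext XOR key, Gaussian noise, and simulated traces rather than real measurements); it is not the attack setup used in the thesis:

```python
import numpy as np

def hamming_weight(x: int) -> int:
    """Number of set bits; a common model of power leakage."""
    return bin(x).count("1")

rng = np.random.default_rng(seed=42)
SECRET_KEY = 0x3C  # hypothetical key byte to recover

# Simulated measurements: leakage of (plaintext XOR key) plus noise.
plaintexts = rng.integers(0, 256, size=2000)
traces = np.array([hamming_weight(int(p) ^ SECRET_KEY) for p in plaintexts],
                  dtype=float)
traces += rng.normal(0.0, 0.5, size=traces.shape)

# CPA: correlate the predicted leakage of every key guess with the traces;
# the correct guess maximizes the Pearson correlation coefficient.
correlations = np.empty(256)
for guess in range(256):
    model = np.array([hamming_weight(int(p) ^ guess) for p in plaintexts],
                     dtype=float)
    correlations[guess] = np.corrcoef(model, traces)[0, 1]

recovered = int(np.argmax(correlations))
print(hex(recovered))  # the guess whose model best matches the traces
```

With enough traces, the statistical estimate singles out the correct key even though each individual measurement is noisy, which is why both power and EM variants of SCA need numerous samples.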
In this thesis, in addition to describing attack methodologies, considerable effort is devoted to the structure of the proposed logic prototype, in order to conduct security research on countermeasures at the logic-architecture level. One characteristic of SCA lies in the format of the leakage sources. A typical side-channel attack is power-based analysis, where the fundamental capacitance of the MOS transistor and other parasitic capacitances are the essential leakage sources. A robust SCA-resistant logic must therefore eliminate or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports, and routes. The EDA tools provided by vendors manipulate the logic from a higher level rather than at the gate level, where side-channel leakage manifests itself. Classical implementations therefore barely meet these needs and inevitably cripple the prototype. For all these reasons, a customized and flexible design scheme must be considered. This thesis presents the design and implementation of an innovative logic to counter SCA, addressing three fundamental aspects: I. It is based on a hiding strategy over a dual-rail circuit at the gate level, dynamically balancing the leakage in the lower layers; II. The logic exploits the architectural features of FPGAs to minimize the resource cost of the implementation; III. It relies on a set of custom assistant tools, incorporated into the generic FPGA design flow, to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical application on FPGAs from the Xilinx family.
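The leakage-flattening idea behind dual-rail coding can be shown in a few lines. The sketch below is a conceptual model of my own, not the PA-DPL implementation proposed in the thesis: each bit travels on a complementary pair of wires, and after an all-zeros precharge phase exactly one wire of each pair is set during evaluation, so the aggregate switching activity is independent of the data value.

```python
def dual_rail_encode(bits):
    """Encode each bit b as the complementary wire pair (b, not b)."""
    return [(b, 1 - b) for b in bits]

def switching_weight(pairs):
    """Wires set during evaluation after an all-zeros precharge phase."""
    return sum(true_rail + false_rail for true_rail, false_rail in pairs)

# The weight equals len(bits) for *any* data word, so a power model that
# only sees how many wires toggled learns nothing about the value.
w0 = switching_weight(dual_rail_encode([0, 0, 0, 0]))
w1 = switching_weight(dual_rail_encode([1, 0, 1, 1]))
print(w0, w1)  # both equal 4
```

In real hardware the balance also depends on the two rails having identical capacitive load, which is exactly why the thesis insists on identical routing networks for the rail pairs.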
In this sense, the methodology and tools are flexible enough to be extended to a wide range of applications requiring much stricter and more sophisticated gate- or routing-level constraints. This thesis makes a substantial effort to ease the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions. All proposals are effectively backed by experimental attacks in order to validate the security advantages of the system. This research aims to close the gap between the implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of FPGA implementation tools, developed to work alongside the generic FPGA design flow, in order to create the dual-rail logic in an innovative way. A new approach to encryption security is proposed to obtain customization, automation, and flexibility in fine-grained low-level circuit prototyping. The main contributions of this research are briefly summarized as follows: Precharge Absorbed-DPL (PA-DPL) logic: netlist conversion is used to reserve free LUTs to execute the precharge and Ex signal in a DPL logic. Row-crossed interleaved placement with identically routed pairs in dual-rail networks, which helps increase resistance against selective EM measurement and mitigates the impact of process variations. Custom execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic.
(a) To detect and repair routing conflicts; (b) to detect and repair asymmetric routes; (c) to be used in other logics requiring strict control of interconnections in Xilinx-based applications. A custom CPA test platform for EM and power analysis, including the construction of the platform and the measurement and analysis methods for the attacks. Timing analysis to quantify security levels. Security partitioning in the partial conversion of a complex cipher system to reduce protection costs. A proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impacts of silicon process variation. This doctoral thesis is organized as follows: Chapter 1 covers the fundamentals of side-channel attacks, from basic concepts and analysis-model theory to platform implementation and attack execution. Chapter 2 presents SCA resistance strategies against differential power and EM attacks; in addition, this chapter proposes a compact and secure dual-rail logic as a major contribution and presents the logic transformation based on a gate-level design. Chapter 3 addresses the challenges of implementing generic dual-rail logic, describing a custom design flow to solve the implementation problems together with a proposed automatic development tool to mitigate design barriers and ease the process. Chapter 4 describes in detail the development and implementation of the proposed tools.
The security verification and validation of the proposed logic, together with a sophisticated experiment verifying the security of the routing, are described in Chapter 5. Finally, a summary of the conclusions of the thesis and perspectives for future work are included in Chapter 6. To go deeper into the content of the thesis, each chapter is described in more detail below: Chapter 1 introduces the hardware implementation platform and the basic theory of side-channel attacks, mainly covering: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected encryption algorithm (a commercial Advanced Encryption Standard (AES) module); (c) the essential elements of side-channel methods, which reveal the dissipation leakage correlated with internal behavior, and the method for recovering the relationship between the physical fluctuations in side-channel traces and the internal data being processed; (d) the configurations of the power/EM test platforms used in this thesis. The content of the thesis broadens and deepens from Chapter 2 onward, which addresses several key aspects. First, the protection principle of dynamic compensation in generic Dual-rail Precharge Logic (DPL) is explained by describing the compensated elements at the gate level. Second, the PA-DPL logic is proposed as an original contribution, detailing the logic protocol and an application case. Third, two custom design flows are shown for performing the dual-rail conversion. Along with this, the technical definitions related to manipulation above the netlist at the LUT level are clarified.
Finally, a brief discussion of the overall process closes the chapter. Chapter 3 studies the main challenges in implementing DPLs on FPGAs. The security level of state-of-the-art SCA-resistant solutions has been degraded by the implementation barriers of conventional EDA tools. For the FPGA architecture under study, the problems of dual-rail formats, parasitic impacts, technological bias, and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without penalizing security levels, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structural features such as interleaved dual rails and cloned routing networks to application methods such as the custom EDA automation tools. In addition, a self-adaptive heating system is presented and applied to a dual-core logic, alternately adjusting the local temperature to balance the negative impacts of process variation during real-time operation. Chapter 4 focuses on the details of the toolkit implementation. Built on a third-party API, the custom toolkit can manipulate the elements of the post-P&R circuit logic in ncd format (an unreadable binary version of the xdl) converted to the Xilinx XDL format. The mechanism and rationale of the proposed tools are carefully described, covering routing detection and the repair approaches.
The developed toolset aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter particularly specifies the technical foundations for supporting implementations on Xilinx devices, and the flexibility of the tools for use in other applications. Chapter 5 focuses on the case studies used to validate the security grades of the proposed logic. The detailed technical problems encountered during execution and some new implementation techniques are discussed: (a) the impact of the proposed toolkit on the placement process is discussed; different implementation schemes, taking into account the global optimization of security and cost, are verified experimentally in order to find the optimized placement and repair plans; (b) the security validations are performed with correlation and timing-analysis methods; (c) an asymptotic tactic is applied to a BCDL-structured AES core to rigorously validate the impact of routing on the security metrics; (d) preliminary results of the self-adaptive heating system against process variation are shown; (e) a practical application of the tools to a complete cipher design is introduced. Chapter 6 includes the general summary of the work presented in this doctoral thesis. Finally, a brief perspective on future work is set out, which may extend the potential of the thesis contributions to a scope beyond the domain of cryptography on FPGAs. ABSTRACT This PhD thesis mainly concentrates on countermeasure techniques against the Side-Channel Attack (SCA), which was first brought to academic attention some seventeen years ago.
The related research has grown remarkably over the past decades, yet the design of solid and efficient protections curiously remains an open research topic, where more reliable initiatives are required for personal privacy, enterprise data and national security. The earliest documented use of secret code can be traced back to around 1700 B.C., when modified hieroglyphs were carved into inscriptions in ancient Egypt. Information security has always received serious attention in diplomatic and military intelligence transmission. With the rapid evolution of modern communication techniques, cryptographic solutions were first coupled with electronic signals to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of content transmitted over insecure wired or wireless channels. Limited by the computing power available before the computer era, simple encryption tricks were practically sufficient to conceal information; however, algorithmic vulnerabilities could be exploited to recover the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which aims to guard information systems with complex and advanced algorithms. The appearance of computers greatly accelerated the invention of robust ciphers, whose resistance relies on vastly strengthened computing capabilities; in turn, advanced cryptanalysis has driven computing technology forward. Nowadays the information world has become a crypto world, with pervasive crypto solutions protecting virtually every field. These approaches are strong because of the tight integration of modern mathematical theory with effective hardware practice, which makes it possible to implement cryptographic theory on various platforms (microprocessors, ASICs, FPGAs, etc.).
Security needs from industry are in fact a major driving metric in electronic design, promoting the construction of systems with high performance without sacrificing security. Yet a vulnerability in practical implementations, discovered by Prof. Paul Kocher et al. in 1996, showed that modern digital circuits are inherently exposed to an unconventional attack approach, since then named the side-channel attack after its source of analysis. Critical suspicion of theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. More specifically, digital circuits typically consist of a great number of elementary logic elements (such as MOS, Metal Oxide Semiconductor, transistors) built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells, a mechanism that inevitably produces characteristic physical emanations which can be measured and correlated with internal circuit behaviors. SCAs can be used to reveal confidential data (e.g. a crypto key), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs and smart cards. Using various ways of comparing the predicted leakage with the measured leakage, secrets can be reconstructed at a far smaller cost in time and computation. More precisely, SCA encompasses a wide range of attack types, typically analyses of power consumption or electromagnetic (EM) radiation; both rely on statistical analysis and hence require a number of samples. Crypto algorithms are not intrinsically fortified against SCA. Because of the severity of the threat, much attention must be paid to the implementation so as to assemble countermeasures that camouflage the leakages through these "side channels". Countermeasures against SCA evolve along with the attack techniques themselves.
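The statistical comparison between predicted and measured leakage described above is, in its most common form, a correlation power analysis (CPA). The following is a minimal sketch, assuming a Hamming-weight leakage model, a toy 3-bit S-box and simulated traces; none of these correspond to the thesis's actual AES setup:

```python
from statistics import mean

def hamming_weight(x):
    """Number of set bits: a common model of data-dependent leakage."""
    return bin(x).count("1")

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative 3-bit S-box and key (NOT the thesis's AES module).
SBOX = [3, 6, 0, 5, 7, 1, 4, 2]
SECRET_KEY = 5
plaintexts = list(range(8))
# Simulated trace samples: leakage of SBOX[p ^ key] plus a small
# deterministic term standing in for measurement noise.
traces = [hamming_weight(SBOX[p ^ SECRET_KEY]) + 0.1 * (p % 3)
          for p in plaintexts]

def best_guess(plaintexts, traces):
    """CPA core: for each key guess, predict the leakage of the targeted
    intermediate value and keep the guess that correlates best with the
    measured traces (a positively correlated leakage model is assumed)."""
    return max(range(len(SBOX)),
               key=lambda g: pearson(
                   [hamming_weight(SBOX[p ^ g]) for p in plaintexts],
                   traces))
```

Only the correct guess models the data-dependent part of the measurements, so its correlation stands out against all wrong hypotheses.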
These physical characteristics call for countermeasures at the physical layer, which can broadly be classified into extrinsic and intrinsic vectors. Extrinsic countermeasures aim to confuse the attacker by adding noise and misalignment to the internal activity. Intrinsic countermeasures, by contrast, are built into the implementation itself, modifying it to minimize the measurable leakage or to make it insensitive to the processed data. Hiding and masking are the two typical techniques in this category. Concretely, masking operates at the algorithmic level, altering the sensitive intermediate values with a mask in a reversible way. Unlike linear operations, the non-linear operations that are ubiquitous in modern ciphers are difficult to mask. Hiding, proven to be an effective counter-solution, mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from describing the attack methodologies, effort has also been dedicated to logic prototyping, mounting extensive security investigations of logic-level countermeasures. One characteristic of SCA lies in the nature of its leakage sources. The typical side-channel attack is power-based analysis, in which the fundamental capacitances of MOS transistors and other parasitic capacitances are the essential leakage sources. A robust SCA-resistant logic must therefore eliminate or mitigate the leakages from these micro units, such as basic logic gates, I/O ports and routing. Vendor-provided EDA tools manipulate the logic at a higher behavioral level rather than at the lower gate level where side-channel leakage is generated, so classical implementation flows barely satisfy these needs and inevitably stunt the prototype. A customized and flexible design scheme is therefore called for.
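The reversible masking described here can be illustrated with first-order Boolean masking. A minimal sketch in Python; the pipeline and the toy non-linear map are illustrative assumptions, not a scheme from the thesis:

```python
import secrets

def masked_xor_pipeline(secret_byte, other_byte):
    """First-order Boolean masking of a linear (XOR) step: the
    computation only ever touches the masked share and the mask,
    never the raw sensitive intermediate value."""
    mask = secrets.randbits(8)
    masked = secret_byte ^ mask          # masked share
    masked_result = masked ^ other_byte  # linear op commutes with the mask
    return masked_result ^ mask          # unmask only at the end

# Masking is transparent for linear operations:
assert masked_xor_pipeline(0xA5, 0x3C) == (0xA5 ^ 0x3C)

# A toy non-linear map (illustrative, not a real S-box) does NOT
# commute with the mask, which is why masking non-linear layers
# requires dedicated schemes:
SBOX_TOY = [(x * x + 1) % 8 for x in range(8)]
x, m = 3, 6
assert SBOX_TOY[x ^ m] != SBOX_TOY[x] ^ m
```

The last assertion is the crux of the remark in the text: for non-linear operations, f(x XOR m) generally differs from f(x) XOR m, so the mask cannot simply be carried through.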
This thesis profiles an innovative logic style to counter SCA, addressing three major aspects: I. the proposed logic is based on a hiding strategy in a gate-level dual-rail style, dynamically counterbalancing the side-channel leakage from the lower circuit layers; II. it exploits architectural features of modern FPGAs to minimize the implementation expense; III. it is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, that perform the circuit manipulations automatically. The automatic design toolkit supports the proposed dual-rail logic and facilitates practical implementation on Xilinx FPGA families, while the methodologies and tools are flexible enough to be extended to a wide range of applications in which rigid and sophisticated gate- or routing-level constraints are required. Considerable effort has been made in this thesis to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on a selected, widely used crypto algorithm, allowing a thorough and fair evaluation with respect to prior solutions, and all proposals are verified by security experiments. The presented research work attempts to resolve these implementation difficulties. The essence formalized throughout this thesis is a customized execution toolkit for modern FPGA systems, developed to work together with the generic FPGA design flow for creating the innovative dual-rail logic, and a method in the crypto-security area for obtaining customization, automation and flexibility in low-level circuit prototyping, with fine granularity over intractable routing. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL (PA-DPL) logic: netlist conversion is used to reserve free LUT inputs that carry the Precharge and Ex signals in a dual-rail logic style.
A row-crossed interleaved placement method with identical routing pairs in the dual-rail networks, which increases the resistance against selective EM measurement and mitigates the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be reused in other logic styles where strict network control is required in the Xilinx scenario. A customized correlation-analysis testbed for EM and power attacks, including the platform construction, measurement method and attack analysis. A timing-analysis-based method for quantifying the security grade. A methodology for security partitioning of complex crypto systems to reduce the protection cost. A proof-of-concept self-adaptive heating system that mitigates the electrical impact of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows: Chapter 1 discusses the side-channel attack fundamentals, from theoretical basics to analysis models, and further to platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks; in this chapter a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles; a customized design flow that solves the implementation problems is described together with a self-developed automatic implementation toolkit, mitigating the design barriers and facilitating the process. Chapter 4 originally elaborates the tool specifics and construction details.
The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing-verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next. Chapter 1 introduces the hardware implementation testbed and the side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module, which is detailed; (c) the essentials of side-channel methods, which reveal the dissipation leakage correlated with the internal behaviors, and the method for recovering the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used within the thesis work. The content of the thesis is expanded and deepened from Chapter 2 onwards, which is divided into several aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, two custom workflows for realizing the rail conversion are presented. Meanwhile, the technical definitions involved in manipulating the netlist at LUT level are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is degraded by the implementation barriers of conventional EDA tools.
In the studied FPGA scenario, the problems of dual-rail format, parasitic impact, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and applied to a dual-core logic, designed to alternately adjust the local temperature so as to balance the negative impact of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit (an unreadable binary version of the xdl file) once converted to the Xilinx xdl format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter particularly specifies the technical essentials for supporting implementations on Xilinx devices and the flexibility of the tools to be extended to other applications. Chapter 5 focuses on the case studies that validate the security grades of the proposed logic style as implemented with the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impacts of using the proposed toolkit are examined.
Different execution schemes, considering the global optimization of security and cost, are verified experimentally so as to find the optimized placement and repair schemes; (b) security validations are performed with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metrics; (d) the preliminary results of using the self-adaptive heating system against process variation are given; (e) a practical application of the proposed toolkit to a large design is introduced. Chapter 6 contains the general summary of the complete work presented in this thesis. Finally, a brief perspective on future work is drawn, which may expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.
Abstract:
This study seeks to understand the relationship between the intrinsic and extrinsic attributes of jeans and their retail price, with the specific objective of analyzing separately the influence of intrinsic and of extrinsic attributes on prices. To this end it draws on the attribute theory proposed by Lancaster (1966) and on the hedonic price methods proposed by Rosen (1974), through which it is possible to observe the importance of bundles of intrinsic and extrinsic attributes for prices, as well as the composition of attribute bundles for different economic profiles of consumers. Twelve attribute categories were analyzed, 5 of intrinsic and 7 of extrinsic attributes. Data were collected by observation between July 1 and July 31, 2015, in the largest shopping malls and main street stores of São Paulo. Multiple regression and quantile regression were then run on the collected data. The multiple regression yielded an R² of 58%; in this analysis the main influencing attributes are: premium store, assisted sales, store origin, store size (megastore), store size (large), destroyed wash, resin finish, flare cut, dirty wash, store location (street or mall), accessories, skinny cut, and the elastane and polyester inputs. The quantile regression provided the analysis for the 10% most expensive and the 10% cheapest jeans.
For the most expensive jeans the R² is 45%, and more extrinsic attributes than product attributes affect the price: the polyester and elastane inputs, wash and resin finishing, origin of the store brand, brand positioning, assisted sales, store location, private-label credit card (in this case with a negative influence) and store size all proved relevant to the pricing of the most expensive jeans observed in this study. For the cheapest jeans, with an R² of 27%, there seems to be a balance between the number of intrinsic and extrinsic variables affecting the price, since only the cut (an intrinsic attribute) and the private-label card (an extrinsic attribute) appear not to affect pricing. It was concluded that extrinsic attributes predominate among those influencing the retail price of jeans.
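A hedonic price regression of the kind used in the study treats each attribute as a regressor whose coefficient estimates its implicit price. A minimal sketch with fabricated data; the attribute choice, prices and numpy-based fit are illustrative assumptions, not the study's dataset:

```python
import numpy as np

# Fabricated observations: retail price vs. two dummy attributes.
# Column order: intercept, premium_store (extrinsic), elastane (intrinsic).
X = np.array([
    [1, 0, 0],
    [1, 0, 1],
    [1, 1, 0],
    [1, 1, 1],
    [1, 0, 0],
    [1, 1, 1],
], dtype=float)
prices = np.array([100.0, 120.0, 180.0, 220.0, 105.0, 215.0])

# Hedonic regression on log price: each coefficient is the implicit
# (log) price premium attached to the presence of one attribute.
beta, *_ = np.linalg.lstsq(X, np.log(prices), rcond=None)
premium_store_effect = np.exp(beta[1]) - 1  # approx. % premium, extrinsic
elastane_effect = np.exp(beta[2]) - 1       # approx. % premium, intrinsic
```

In the study's setting the same idea is applied with 12 attribute categories, and quantile regression repeats the fit at the tails of the price distribution rather than at the conditional mean.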
Abstract:
Biological wastewater treatment is a complex, multivariate process in which a number of physical and biological processes occur simultaneously. In this study, principal component analysis (PCA) and parallel factor analysis (PARAFAC) were used to profile and characterize Lagoon 115E, a multistage biological lagoon treatment system at Melbourne Water's Western Treatment Plant (WTP) in Melbourne, Australia; the objective was to increase our understanding of the multivariate processes taking place in the lagoon. The data span a 7-year period during which samples were collected as often as weekly from the ponds of Lagoon 115E and subjected to analysis. The resulting database, involving 19 chemical and physical variables, was studied using the multivariate data analysis methods PCA and PARAFAC. With these methods, alterations in the state of the wastewater due to intrinsic and extrinsic factors could be discerned. The methods were effective in illustrating and visually representing the complex purification stages and cyclic changes occurring along the lagoon system. The two methods proved complementary, with each having its own beneficial features. (C) 2003 Elsevier B.V. All rights reserved.
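PCA of a samples-by-variables matrix like the lagoon dataset can be sketched via the singular value decomposition of the mean-centered data. The example below uses synthetic data (illustrative only, not the WTP measurements):

```python
import numpy as np

def pca(data, n_components=2):
    """PCA via SVD of the mean-centered data matrix. Rows are samples
    (e.g. weekly pond measurements), columns are variables (e.g. the
    19 chemical and physical parameters)."""
    centered = data - data.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]  # sample coordinates
    loadings = Vt[:n_components].T                   # variable contributions
    explained = (S ** 2) / (S ** 2).sum()            # variance ratio per PC
    return scores, loadings, explained[:n_components]

# Synthetic stand-in for a samples x variables matrix: two strongly
# correlated variables plus one independent noise variable.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
data = np.column_stack([x, 2.0 * x + 0.1 * rng.normal(size=50),
                        rng.normal(size=50)])
scores, loadings, explained = pca(data, n_components=2)
```

Plotting the score columns against sampling date or pond number is what produces the trajectory-style visualizations of treatment stages described in the abstract; PARAFAC extends the same idea to three-way (e.g. pond x variable x time) arrays.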
Abstract:
Career success is related to an individual's long-term satisfaction with his or her career. This satisfaction derives from intrinsic and extrinsic aspects, covering an objective dimension (the more visible aspects of career success) that includes salary, professional progression, status and career-development opportunities such as promotion, and a subjective dimension referring to the personal interpretation of what success means, especially in one's career: job satisfaction, pride and feelings of self-fulfillment, among others. The perception of career success may be associated with individual characteristics such as resilience, the dynamic process of positive adaptation in the face of adversity. No studies were found in the literature relating the two variables, that is, examining how much personal resilience may contribute to the perception of career success. To investigate this influence, the main objective of this research is to identify whether the personal resilience of business administrators predicts their perception of career success. The participants were 137 administrators, graduates of various institutions, 56.1% female and 43.7% male, with a mean age of 33 years, divided between married and single (44.5% each). Data were collected through a sociodemographic questionnaire based on the Career Success Perception Scale and the Connor-Davidson Resilience Scale (CD-RISC). The responses formed an electronic database and were analyzed with the Statistical Package for the Social Sciences (SPSS). Hierarchical regression analyses revealed that resilience predicts 5.5% of the perception of objective career success and 9% of the perception of subjective career success.
When the interaction between age and length of service was added, the predictive power of both models, for objective as well as subjective success, rose substantially, roughly doubling. Resilience helps participants perceive career success in both the objective and the subjective dimensions, and this prediction is strengthened by the interaction between age and length of service. The findings confirmed the hypothesis raised. The study contributes to the field, but limitations were also acknowledged, and on their basis a research agenda for future studies was proposed.
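The hierarchical regression described above (entering resilience first, then adding the age × tenure interaction and comparing R²) can be sketched as follows; the data are synthetic and the effect sizes are illustrative assumptions, not the study's results:

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least squares fit; returns the R^2 of the model."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Synthetic data: a success score driven by resilience plus an
# age x tenure interaction (coefficients are arbitrary choices).
rng = np.random.default_rng(1)
n = 137  # same sample size as the study, for flavor
resilience = rng.normal(size=n)
age = rng.normal(size=n)
tenure = rng.normal(size=n)
success = 0.3 * resilience + 0.5 * age * tenure + rng.normal(size=n)

ones = np.ones(n)
X_step1 = np.column_stack([ones, resilience])              # step 1
X_step2 = np.column_stack([ones, resilience, age, tenure,
                           age * tenure])                  # step 2
r2_step1 = r_squared(X_step1, success)
r2_step2 = r_squared(X_step2, success)
delta_r2 = r2_step2 - r2_step1  # variance explained by the added terms
```

The quantity delta_r2 is the incremental R² that the abstract reports as "roughly doubling" when the interaction term enters the model.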
Abstract:
Neural stem cells (NSC) are a valuable model system for understanding the intrinsic and extrinsic controls for self-renewal and differentiation choice. They also offer a platform for drug screening and neurotoxicity studies, and hold promise for cell replacement therapies for the treatment of neurodegenerative diseases. Fully exploiting the potential of this experimental tool often requires the manipulation of intrinsic cues of interest using transfection methods, to which NSC are relatively resistant. In this paper, we show that mouse and human NSC readily take up polystyrene-based microspheres which can be loaded with a range of chemical or biological cargoes. This uptake can take place in the undifferentiated stage without affecting NSC proliferation and their capacity to give rise to neurons and glia. We demonstrate that β-galactosidase-loaded microspheres could be efficiently introduced into NSC with no apparent toxic effect, thus providing proof-of-concept for the use of microspheres as an alternative biomolecule delivery system.