Resumo:
Ionospheric interaction experiments using a conductive, fully bare tether are discussed. With an optimal design, requiring a 1.15 mm diameter and a 7.5 km full length for a collected current of 0.87 A in daytime conditions, the tether radiates 0.33 W as fast magnetosonic waves and 0.16 W as Alfvén waves. Secondary keV electrons are produced over a 6.5 km length, giving rise to noticeable auroral effects in the D-layer at low geomagnetic latitudes. A preliminary design of the experiment, to be implemented on either a satellite or a space station, has been carried out. An ejector gives an initial velocity to an end mass, with a free spool of tether unwinding from that mass during the first stage of deployment; the other deployment phases are monitored through the tether velocity, which drives a reel with an unwinding device.
Resumo:
El comercio electrónico ha experimentado un fuerte crecimiento en los últimos años, favorecido especialmente por el aumento de las tasas de penetración de Internet en todo el mundo. Sin embargo, no todos los países están evolucionando de la misma manera, con un espectro que va desde las naciones pioneras en desarrollo de tecnologías de la información y comunicaciones, que cuentan con un elevado porcentaje de internautas y de compradores online, hasta las rezagadas de rápida adopción que, pese a contar con una menor penetración de acceso, presentan una alta tasa de internautas compradores. Entre ambos extremos se encuentran países como España que, aunque alcanzó hace años una tasa considerable de penetración de usuarios de Internet, no ha conseguido una buena tasa de transformación de internautas en compradores. Pese a que el comercio electrónico ha experimentado importantes aumentos en los últimos años, sus tasas de crecimiento siguen estando por debajo de las de países con características socio-económicas similares. Para intentar conocer las razones que afectan a la adopción del comercio electrónico por parte de los compradores, la investigación científica del fenómeno ha empleado diferentes enfoques teóricos. De entre todos ellos ha destacado el uso de los modelos de adopción, provenientes de la literatura de adopción de sistemas de información en entornos organizativos. Estos modelos se basan en las percepciones de los compradores para determinar qué factores pueden predecir mejor la intención de compra y, en consecuencia, la conducta real de compra de los usuarios. Pese a que en los últimos años han proliferado los trabajos de investigación que aplican los modelos de adopción al comercio electrónico, casi todos tratan de validar sus hipótesis mediante el análisis de muestras de consumidores tratadas como un único conjunto, del que se obtienen conclusiones generales. Sin embargo, desde el origen del marketing, y en especial a partir de la segunda mitad del siglo XIX, se considera que existen diferencias en el comportamiento de los consumidores, que pueden ser debidas a características demográficas, sociológicas o psicológicas. Estas diferencias se traducen en necesidades distintas, que sólo podrán ser satisfechas con una oferta adaptada por parte de los vendedores. Además, por contar el comercio electrónico con unas características particulares que lo diferencian del comercio tradicional –especialmente por la falta de contacto físico entre el comprador y el producto–, a las diferencias en la adopción para cada consumidor se les añaden las diferencias derivadas del tipo de producto adquirido, que, si bien habían sido consideradas en el canal físico, en el comercio electrónico cobran especial relevancia. A la vista de todo ello, el presente trabajo pretende abordar el estudio de los factores determinantes de la intención de compra y la conducta real de compra en comercio electrónico por parte del consumidor final español, teniendo en cuenta el tipo de segmento al que pertenezca dicho comprador y el tipo de producto considerado. Para ello, el trabajo contiene ocho apartados, entre los que se encuentran cuatro bloques teóricos y tres bloques empíricos, además de las conclusiones. Estos bloques dan lugar a los siguientes ocho capítulos por orden de aparición en el trabajo: introducción, situación del comercio electrónico, modelos de adopción de tecnología, segmentación en comercio electrónico, diseño previo del trabajo empírico, diseño de la investigación, análisis de los resultados y conclusiones.
El capítulo introductorio justifica la relevancia de la investigación, además de fijar los objetivos, la metodología y las fases seguidas para el desarrollo del trabajo. La justificación se complementa con el segundo capítulo, que cuenta con dos elementos principales: en primer lugar se define el concepto de comercio electrónico y se hace una breve retrospectiva desde sus orígenes hasta la situación actual en un contexto global; en segundo lugar, el análisis estudia la evolución del comercio electrónico en España, mostrando su desarrollo y situación presente a partir de sus principales indicadores. Este apartado no sólo permite conocer el contexto de la investigación, sino que además permite contrastar la relevancia de la muestra utilizada en el presente estudio con el perfil español respecto al comercio electrónico. Los capítulos tercero –modelos de adopción de tecnologías– y cuarto –segmentación en comercio electrónico– sientan las bases teóricas necesarias para abordar el estudio. En el capítulo tres se hace una revisión general de la literatura de modelos de adopción de tecnología y, en particular, de los modelos de adopción empleados en el ámbito del comercio electrónico. El resultado de dicha revisión deriva en la construcción de un modelo adaptado basado en los modelos UTAUT (Unified Theory of Acceptance and Use of Technology, Teoría unificada de la aceptación y el uso de la tecnología) y UTAUT2, combinado con dos factores específicos de adopción del comercio electrónico: el riesgo percibido y la confianza percibida. Por su parte, en el capítulo cuatro se revisan las metodologías de segmentación de clientes y productos empleadas en la literatura. De dicha revisión se obtiene un amplio conjunto de variables, de las que finalmente se escogen nueve variables de clasificación que se consideran adecuadas tanto por su adaptación al contexto del comercio electrónico como por su adecuación a las características de la muestra empleada para validar el modelo. Las nueve variables se agrupan en tres conjuntos: variables de tipo socio-demográfico –género, edad, nivel de estudios, nivel de ingresos, tamaño de la unidad familiar y estado civil–, de comportamiento de compra –experiencia de compra por Internet y frecuencia de compra por Internet– y de tipo psicográfico –motivaciones de compra por Internet–. La segunda parte del capítulo cuatro se dedica a la revisión de los criterios empleados en la literatura para la clasificación de los productos en el contexto del comercio electrónico. De dicha revisión se obtienen quince grupos de variables que pueden tomar un total de treinta y cuatro valores, lo que deriva en un elevado número de combinaciones posibles. Sin embargo, pese a haber sido utilizadas en el contexto del comercio electrónico, no en todos los casos se ha comprobado la influencia de dichas variables respecto a la intención de compra o la conducta real de compra por Internet; por este motivo, y con el objetivo de definir una clasificación robusta y abordable de tipos de productos, en el capítulo cinco se lleva a cabo una validación de las variables de clasificación de productos mediante un experimento previo con 207 muestras.
Seleccionando sólo aquellas variables objetivas que no dependan de la interpretación personal de los consumidores y que determinen grupos significativamente distintos respecto a la intención y conducta de compra de los consumidores, se obtiene un modelo de dos variables que, combinadas, dan lugar a cuatro tipos de productos: bien digital, bien no digital, servicio digital y servicio no digital. Definidos el modelo de adopción y los criterios de segmentación de consumidores y productos, en el sexto capítulo se desarrolla el modelo completo de investigación, formado por un conjunto de hipótesis, obtenidas de la revisión de la literatura de los capítulos anteriores, que definen las influencias esperadas de las variables de segmentación sobre las relaciones del modelo de adopción. Este modelo confiere a la investigación un carácter social y de tipo fundamentalmente exploratorio, puesto que en muchos casos ni siquiera se han encontrado evidencias empíricas previas que permitan el enunciado de hipótesis sobre la influencia de determinadas variables de segmentación. El capítulo seis contiene además la descripción del instrumento de medida empleado en la investigación, conformado por un total de 125 preguntas y sus correspondientes escalas de medida, así como la descripción de la muestra representativa empleada en la validación del modelo, compuesta por un grupo de 817 personas españolas o residentes en España. El capítulo siete constituye el núcleo del análisis empírico del trabajo de investigación, que se compone de dos elementos fundamentales. Primeramente se describen las técnicas estadísticas aplicadas para el estudio de los datos que, dada la complejidad del análisis, se dividen en tres grupos fundamentales: método de mínimos cuadrados parciales (PLS, Partial Least Squares), herramienta estadística de análisis multivariante con capacidad de análisis predictivo que se emplea en la determinación de las relaciones estructurales de los modelos propuestos; análisis multigrupo, conjunto de técnicas que permiten comparar los resultados obtenidos con el método PLS entre dos o más grupos derivados del uso de una o más variables de segmentación –en este caso se emplean cinco métodos de comparación, lo que permite asimismo comparar los rendimientos de cada uno de los métodos–; y determinación de segmentos no identificados a priori, ya que en el caso de algunas de las variables de segmentación no existe un criterio de clasificación definido a priori, sino que se obtiene a partir de la aplicación de técnicas estadísticas de clasificación. En este último caso se emplean dos técnicas fundamentales: análisis de componentes principales –dado el elevado número de variables empleadas para la clasificación– y análisis clúster, en el que se combina una técnica jerárquica, que calcula el número óptimo de segmentos, con una técnica por etapas que es más eficiente en la clasificación, pero exige conocer el número de clústeres a priori. La aplicación de dichas técnicas estadísticas sobre los modelos resultantes de considerar los distintos criterios de segmentación, tanto de clientes como de productos, da lugar al análisis de un total de 128 modelos de adopción de comercio electrónico y 65 comparaciones multigrupo, cuyos resultados y principales consideraciones se elaboran a lo largo del capítulo. Para concluir, el capítulo ocho recoge las conclusiones del trabajo, divididas en cuatro partes diferenciadas.
En primer lugar se examina el grado de alcance de los objetivos planteados al inicio de la investigación; después se desarrollan las principales contribuciones que este trabajo aporta tanto desde el punto de vista metodológico como desde los puntos de vista teórico y práctico; en tercer lugar, se profundiza en las conclusiones derivadas del estudio empírico, que se clasifican según los criterios de segmentación empleados, y que combinan resultados confirmatorios y exploratorios; por último, el trabajo recopila las principales limitaciones de la investigación, tanto de carácter teórico como empírico, así como aquellos aspectos que, no habiendo podido plantearse dentro del contexto de este estudio, o como consecuencia de los resultados alcanzados, se presentan como líneas futuras de investigación. ABSTRACT Favoured by an increase in Internet penetration rates across the globe, electronic commerce has experienced rapid growth over the last few years. Nevertheless, adoption of electronic commerce has differed from one country to another. On one hand, it has been observed that countries leading e-commerce adoption have a large percentage of Internet users as well as of online purchasers; on the other hand, other markets, despite having a low percentage of Internet users, show a high percentage of online buyers. Halfway between those two ends of the spectrum, we find countries such as Spain which, despite having moderately high Internet penetration rates and socio-economic characteristics similar to those of some of the leading countries, have failed to turn Internet users into active online buyers. Several theoretical approaches have been taken in an attempt to define the factors that influence the use of electronic commerce systems by customers. One of the better-known frameworks used to characterize adoption factors is acceptance modelling theory, which is derived from the literature on information systems adoption in organizational environments. These models are based on individual perceptions of which factors determine purchase intention, as a means to explain users’ actual purchasing behaviour. Even though research on electronic commerce adoption models has increased in terms of volume and scope over the last years, the majority of studies validate their hypotheses by using a single sample of consumers from which they obtain general conclusions. Nevertheless, since the birth of marketing, and more specifically from the second half of the 19th century, differences in consumer behaviour owing to demographic, sociological and psychological characteristics have also been taken into account. Such differences are generally translated into different needs that can only be satisfied when sellers adapt their offer to their target market. Electronic commerce has a number of features that make it different from traditional commerce; the best example of this is the lack of physical contact between customers and products, and between customers and vendors. In addition, differences that depend on the type of product may also play an important role in electronic commerce. In view of all the above, the present research aims to address the study of the main factors influencing purchase intention and actual purchase behaviour in electronic commerce by Spanish end-consumers, taking into consideration both the customer group to which they belong and the type of product being purchased.
In order to achieve this goal, this Thesis is structured in eight chapters: four theoretical sections, three empirical blocks and a final section summarizing the conclusions derived from the research. The chapters are arranged in sequence as follows: introduction, current state of electronic commerce, technology adoption models, electronic commerce segmentation, preliminary design of the empirical work, research design, data analysis and results, and conclusions. The introductory chapter offers a detailed justification of the relevance of this study in the context of e-commerce adoption research; it also sets out the objectives, methodology and research stages. The second chapter further expands and complements the introductory chapter, focusing on two elements: the concept of electronic commerce and its evolution from a general point of view, and the evolution of electronic commerce in Spain together with its main adoption indicators. This section is intended to allow the reader to understand the research context, and also to serve as a basis to justify the relevance and representativeness of the sample used in this study. Chapters three (technology acceptance models) and four (segmentation in electronic commerce) set the theoretical foundations for the study. Chapter 3 presents a thorough literature review of technology adoption modelling, focusing on previous studies on electronic commerce acceptance. As a result of the literature review, the research framework is built upon a model based on UTAUT (Unified Theory of Acceptance and Use of Technology) and its evolution, UTAUT2, including two specific electronic commerce adoption factors: perceived risk and perceived trust. Chapter 4 deals with the client and product segmentation methodologies used in the literature. From the literature review, a wide range of classification variables is studied, and a shortlist of nine classification variables has been selected for inclusion in the research. The criteria for variable selection were their adequacy to the characteristics of electronic commerce as well as to those of the sample. The nine variables have been classified in three groups: socio-demographic (gender, age, education level, income, family size and relationship status), behavioural (experience in electronic commerce and frequency of purchase) and psychographic (online purchase motivations) variables. The second half of chapter 4 is devoted to a review of the product classification criteria in electronic commerce. The review has led to the identification of a final set of fifteen groups of variables, which can take a total of thirty-four possible values. However, due to the lack of empirical evidence in the context of electronic commerce, further investigation on the validity of this set of product classifications was deemed necessary. For this reason, chapter 5 proposes an empirical study to test the different product classification variables with 207 samples. A selection of product classifications including only those variables that are objective, able to identify distinct groups and not dependent on consumers’ point of view led to a final classification of products consisting of two groups of variables for the final empirical study. The combination of these two groups gave rise to four types of products: digital and non-digital goods, and digital and non-digital services. Chapter six characterizes the research –social, exploratory research– and presents the final research model and research hypotheses.
The exploratory nature of the research becomes apparent in instances where no prior empirical evidence on the influence of certain segmentation variables was found. Chapter six also includes the description of the measurement instrument used in the research, consisting of a total of 125 questions –and the measurement scales associated with each of them– as well as the description of the sample used for model validation (consisting of 817 Spanish residents). Chapter 7 is the core of the empirical analysis performed to validate the research model, and it is divided into two separate parts: a description of the statistical techniques used for data analysis, and the actual data analysis and results. The first part is structured in three different blocks: Partial Least Squares (PLS): a multivariate statistical method with predictive capability, used to determine the structural relationships of the proposed models; Multi-group analysis: a set of techniques that allow comparing the outcomes of PLS analysis between two or more groups obtained by using one or more segmentation variables. More specifically, five comparison methods were used, which additionally gives the opportunity to assess the efficiency of each method; Determination of a priori undefined segments: in some cases, classification criteria did not necessarily exist for some segmentation variables, such as customer motivations. In these cases, the application of statistical classification techniques is required. For this study, two main classification techniques were used sequentially: principal component factor analysis –in order to reduce the number of variables– and cluster analysis. The application of the statistical methods to the models derived from the inclusion of the various segmentation criteria –for both clients and products– led to the analysis of 128 different electronic commerce adoption models and 65 multi-group comparisons. Finally, chapter 8 summarizes the conclusions from the research, divided into four parts: first, an assessment of the degree of achievement of the different research objectives is offered; then, methodological, theoretical and practical implications of the research are drawn; this is followed by a discussion of the results from the empirical study –based on the segmentation criteria for the research–; fourth and last, the main limitations of the research –both empirical and theoretical– as well as future avenues of research are detailed.
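To make the two-stage segmentation concrete, a minimal sketch is given below: principal components to reduce the motivation items, a hierarchical (Ward) step to suggest the number of segments from the dendrogram, and a k-means pass for the final, more efficient assignment. This is an illustrative sketch only, not the code used in the thesis; the input data, the variance target and the dendrogram cut-off rule are assumptions.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage

def segment_by_motivations(df: pd.DataFrame, variance_target: float = 0.8):
    """Return (number of segments, cluster label per respondent)."""
    X = StandardScaler().fit_transform(df.values)

    # 1) Principal component analysis: keep enough components to reach
    #    the target share of explained variance.
    pca = PCA(n_components=variance_target, svd_solver="full")
    scores = pca.fit_transform(X)

    # 2) Hierarchical (Ward) clustering to suggest the number of segments:
    #    cut the dendrogram at the largest jump in merge distance.
    Z = linkage(scores, method="ward")
    merge_gaps = np.diff(Z[:, 2])
    k = max(2, len(scores) - (int(np.argmax(merge_gaps)) + 1))

    # 3) K-means with the suggested k for the final, efficient assignment.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    return k, labels

# Placeholder motivation scores (200 respondents, 12 items) for demonstration.
rng = np.random.default_rng(0)
demo = pd.DataFrame(rng.normal(size=(200, 12)))
k, labels = segment_by_motivations(demo)
print(k, np.bincount(labels))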
Resumo:
El desarrollo de software es una actividad inherentemente compleja, que requiere habilidades específicas de comprensión y resolución de problemas. Tanto es así que se puede afirmar con rotundidad que no existe el método perfecto para cada una de las etapas de desarrollo y tampoco existe el modelo de ciclo de vida perfecto: cada nuevo problema que se plantea es diferente a los anteriores en algún aspecto y esto hace que técnicas que funcionaron en proyectos anteriores fracasen en los proyectos nuevos. Por ello actualmente se realiza un planteamiento integrador que pretende utilizar en cada caso las técnicas, métodos y herramientas más acordes con las características del problema planteado al ingeniero. Bajo este punto de vista se plantean nuevos problemas. En primer lugar está la selección de enfoques de desarrollo. Si no existe el mejor enfoque, ¿cómo se hace para elegir el más adecuado de entre el conjunto de los existentes? Un segundo problema estriba en la relación entre las etapas de análisis y diseño. En este sentido existen dos grandes riesgos. Por un lado, se puede hacer un análisis del problema demasiado superficial, con lo que se produce una excesiva distancia entre el análisis y el diseño que muchas veces imposibilita el paso de uno a otro. Por otro lado, se puede optar por un análisis en términos del diseño, lo que provoca que el análisis no cumpla su objetivo de centrarse en el problema, sino que se convierta en una primera versión de la solución, lo que se conoce como diseño preliminar. Como consecuencia de lo anterior surge el dilema del análisis, que puede plantearse como sigue: para cada problema planteado hay que elegir las técnicas más adecuadas, lo que requiere que se conozcan las características del problema. Para ello, a su vez, se debe analizar el problema, eligiendo una técnica antes de conocerlo. Si la técnica utiliza términos de diseño, entonces se ha precondicionado el paradigma de solución y es posible que no sea el más adecuado para resolver el problema. En último lugar están las barreras pragmáticas que frenan la expansión del uso de métodos con base formal, dificultando su aplicación en la práctica cotidiana. Teniendo en cuenta todos los problemas planteados, se requieren métodos de análisis del problema que cumplan una serie de objetivos, el primero de los cuales es la necesidad de una base formal, con el fin de evitar la ambigüedad y permitir verificar la corrección de los modelos generados. Un segundo objetivo es la independencia de diseño: se deben utilizar términos que no tengan reflejo directo en el diseño, para que permitan centrarse en las características del problema. Además, los métodos deben permitir analizar problemas de cualquier tipo: algorítmicos, de soporte a la decisión o basados en el conocimiento, entre otros. A continuación están los objetivos relacionados con aspectos pragmáticos. Por un lado, los métodos deben incorporar una notación textual formal pero no matemática, de forma que se facilite su validación y comprensión por personas sin conocimientos matemáticos profundos, pero que al mismo tiempo sea lo suficientemente rigurosa para facilitar su verificación. Por otro lado, se requiere una notación gráfica complementaria para representar los modelos, de forma que puedan ser comprendidos y validados cómodamente por parte de los clientes y usuarios. Esta tesis doctoral presenta SETCM, un método de análisis que cumple estos objetivos.
Para ello se han definido todos los elementos que forman los modelos de análisis usando una terminología independiente de paradigmas de diseño y se han formalizado dichas definiciones usando los elementos fundamentales de la teoría de conjuntos: elementos, conjuntos y relaciones entre conjuntos. Por otro lado se ha definido un lenguaje formal para representar los elementos de los modelos de análisis – evitando en lo posible el uso de notaciones matemáticas – complementado con una notación gráfica que permite representar de forma visual las partes más relevantes de los modelos. El método propuesto ha sido sometido a una intensa fase de experimentación, durante la que fue aplicado a 13 casos de estudio, todos ellos proyectos reales que han concluido en productos transferidos a entidades públicas o privadas. Durante la experimentación se ha evaluado la adecuación de SETCM para el análisis de problemas de distinto tamaño y en sistemas cuyo diseño final usaba paradigmas diferentes e incluso paradigmas mixtos. También se ha evaluado su uso por analistas con distinto nivel de experiencia – noveles, intermedios o expertos – analizando en todos los casos la curva de aprendizaje, con el fin de averiguar si es fácil de aprender su uso, independientemente de si se conoce o no alguna otra técnica de análisis. Por otro lado se ha estudiado la capacidad de ampliación de los modelos generados con SETCM, para comprobar si permite abordar proyectos realizados en varias fases, en los que el análisis de una fase consista en ampliar el análisis de la fase anterior. En resumidas cuentas, se ha tratado de evaluar la capacidad de integración de SETCM en una organización como la técnica de análisis preferida para el desarrollo de software. Los resultados obtenidos tras esta experimentación han sido muy positivos, habiéndose alcanzado un alto grado de cumplimiento de todos los objetivos planteados al definir el método.---ABSTRACT---Software development is an inherently complex activity, which requires specific abilities of problem comprehension and solving. It is so complex that it can even be said that there is no perfect method for each of the development stages and that there is no perfect life cycle model: each new problem is different from the preceding ones in some respect, and techniques that worked in other problems can fail in the new ones. Given that situation, the current trend is to integrate different methods, tools and techniques, using the ones best suited for each situation. This trend, however, raises some new problems. The first one is the selection of development approaches. If there is no manifestly best approach, how does one go about choosing an approach from the array of available options? The second problem has to do with the relationship between the analysis and design phases. This relation can lead to two major risks. On one hand, the analysis could be too shallow and far away from the design, making it very difficult to perform the transition between them. On the other hand, the analysis could be expressed using design terminology, thus becoming more a kind of preliminary design than a model of the problem to be solved. In third place there is the analysis dilemma, which can be expressed as follows. The developer has to choose the most adequate techniques for each problem, and to make this decision it is necessary to know the most relevant properties of the problem.
This implies that the developer has to analyse the problem, choosing an analysis method before really knowing the problem. If the chosen technique uses design terminology, then the solution paradigm has been preconditioned and it is possible that, once the problem is well known, that paradigm would not be the chosen one. The last problem consists of some pragmatic barriers that limit the applicability of formally based methods, making it difficult to use them in current practice. In order to solve these problems there is a need for analysis methods that fulfil several goals. The first one is the need for a formal base, which prevents ambiguity and allows the verification of the analysis models. The second goal is design independence: the analysis should use a terminology different from the design, to facilitate a real comprehension of the problem under study. In third place, the analysis method should allow the developer to study different kinds of problems: algorithmic, decision-support, knowledge-based, etc. Next there are two goals related to pragmatic aspects. Firstly, the methods should have a non-mathematical but formal textual notation. This notation will allow people without deep mathematical knowledge to understand and validate the resulting models, without losing the rigour needed for verification. Secondly, the methods should have a complementary graphical notation to make the understanding and validation of the relevant parts of the analysis more natural. This Thesis proposes such a method, called SETCM. The elements making up the analysis models have been defined using a terminology that is independent from design paradigms. Those terms have then been formalised using the main concepts of set theory: elements, sets and correspondences between sets. In addition, a formal language has been created which avoids the use of mathematical notations. Finally, a graphical notation has been defined, which can visually represent the most relevant elements of the models. The proposed method has been thoroughly tested during the experimentation phase. It has been used to perform the analysis of 13 actual projects, all of them resulting in products transferred to public or private entities. This experimentation allowed the evaluation of the adequacy of SETCM for the analysis of problems of varying size, whose final design used different paradigms and even mixed ones. The use of the method by people with different levels of expertise was also evaluated, along with the corresponding learning curve, in order to assess whether the method is easy to learn, independently of previous knowledge of other analysis techniques. In addition, the expandability of the analysis models was evaluated, assessing whether the technique is adequate for projects organised in incremental steps, in which the analysis of one step grows from the preceding models. The final goal was to assess whether SETCM can be used inside an organisation as the preferred analysis method for software development. The results obtained have been very positive, as SETCM has achieved a high degree of fulfilment of the goals stated for the method.
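As a rough illustration of the formal base described above (analysis models expressed only with elements, sets and correspondences between sets, free of design vocabulary), a minimal sketch follows. It is not SETCM itself; the domain, the names and the validity check are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Correspondence:
    """A relation between two sets, stored as pairs (source, target)."""
    source: frozenset
    target: frozenset
    pairs: frozenset  # subset of source x target

    def is_valid(self) -> bool:
        # Every pair must relate an element of the source set to one of the target set.
        return all(s in self.source and t in self.target for s, t in self.pairs)

# Hypothetical fragment of an analysis model for a library domain:
# two sets of elements and one correspondence between them.
readers = frozenset({"ana", "luis"})
books = frozenset({"quijote", "lazarillo"})
borrows = Correspondence(readers, books, frozenset({("ana", "quijote")}))
assert borrows.is_valid()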
Resumo:
En esta Tesis Doctoral se emplean y desarrollan Métodos Bayesianos para su aplicación en análisis geotécnicos habituales, con un énfasis particular en (i) la valoración y selección de modelos geotécnicos basados en correlaciones empíricas; en (ii) el desarrollo de predicciones acerca de los resultados esperados en modelos geotécnicos complejos. Se llevan a cabo diferentes aplicaciones a problemas geotécnicos, como es el caso de: (1) En el caso de rocas intactas, se presenta un método Bayesiano para la evaluación de modelos que permiten estimar el módulo de Young a partir de la resistencia a compresión simple (UCS). La metodología desarrollada suministra estimaciones de las incertidumbres de los parámetros y predicciones y es capaz de diferenciar entre las diferentes fuentes de error. Se desarrollan modelos "específicos de roca" para los tipos de roca más comunes y se muestra cómo se pueden "actualizar" esos modelos "iniciales" para incorporar, cuando se encuentra disponible, la nueva información específica del proyecto, reduciendo las incertidumbres del modelo y mejorando sus capacidades predictivas. (2) Para macizos rocosos, se presenta una metodología, fundamentada en un criterio de selección de modelos, que permite determinar el modelo más apropiado, entre un conjunto de candidatos, para estimar el módulo de deformación de un macizo rocoso a partir de un conjunto de datos observados. Una vez que se ha seleccionado el modelo más apropiado, se emplea un método Bayesiano para obtener distribuciones predictivas de los módulos de deformación de macizos rocosos y para actualizarlos con la nueva información específica del proyecto. Este método Bayesiano de actualización puede reducir significativamente la incertidumbre asociada a la predicción, y por lo tanto, afectar las estimaciones que se hagan de la probabilidad de fallo, lo cual es de un interés significativo para los diseños de mecánica de rocas basados en fiabilidad. (3) En las primeras etapas de los diseños de mecánica de rocas, la información acerca de los parámetros geomecánicos y geométricos, las tensiones in-situ o los parámetros de sostenimiento, es, a menudo, escasa o incompleta. Esto plantea dificultades para aplicar las correlaciones empíricas tradicionales que no pueden trabajar con información incompleta para realizar predicciones. Por lo tanto, se propone la utilización de una Red Bayesiana para trabajar con información incompleta y, en particular, se desarrolla un clasificador Naïve Bayes para predecir la probabilidad de ocurrencia de grandes deformaciones (squeezing) en un túnel a partir de cinco parámetros de entrada habitualmente disponibles, al menos parcialmente, en la etapa de diseño. This dissertation employs and develops Bayesian methods to be used in typical geotechnical analyses, with a particular emphasis on (i) the assessment and selection of geotechnical models based on empirical correlations; on (ii) the development of probabilistic predictions of outcomes expected for complex geotechnical models. Examples of application to geotechnical problems are developed, as follows: (1) For intact rocks, we present a Bayesian framework for model assessment to estimate the Young’s moduli based on their UCS. Our approach provides uncertainty estimates of parameters and predictions, and can differentiate among the sources of error. 
We develop ‘rock-specific’ models for common rock types, and illustrate that such ‘initial’ models can be ‘updated’ to incorporate new project-specific information as it becomes available, reducing model uncertainties and improving their predictive capabilities. (2) For rock masses, we present an approach, based on model selection criteria, to select the most appropriate model among a set of candidate models to estimate the deformation modulus of a rock mass, given a set of observed data. Once the most appropriate model is selected, a Bayesian framework is employed to develop predictive distributions of the deformation moduli of rock masses, and to update them with new project-specific data. Such a Bayesian updating approach can significantly reduce the associated predictive uncertainty and, therefore, affect our computed estimates of the probability of failure, which is of significant interest to reliability-based rock engineering design. (3) In the preliminary design stage of rock engineering, the information about geomechanical and geometrical parameters, in situ stress or support parameters is often scarce or incomplete. This poses difficulties in applying traditional empirical correlations, which cannot deal with incomplete data to make predictions. Therefore, we propose the use of Bayesian Networks to deal with incomplete data and, in particular, a Naïve Bayes classifier is developed to predict the probability of occurrence of tunnel squeezing based on five input parameters that are commonly available, at least partially, at the design stage.
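The following sketch illustrates the kind of classifier described in point (3): a Gaussian Naïve Bayes model over five inputs that tolerates incomplete cases by simply dropping any feature reported as NaN from the likelihood product. It is an illustrative sketch, not the dissertation's code; the feature list and the training data are hypothetical placeholders.

import numpy as np

class NaiveBayesSqueezing:
    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.priors_ = {c: float(np.mean(y == c)) for c in self.classes_}
        self.mean_ = {c: np.nanmean(X[y == c], axis=0) for c in self.classes_}
        self.var_ = {c: np.nanvar(X[y == c], axis=0) + 1e-9 for c in self.classes_}
        return self

    def predict_proba(self, x):
        x = np.asarray(x, float)
        mask = ~np.isnan(x)                    # ignore missing inputs
        logp = {}
        for c in self.classes_:
            ll = -0.5 * (np.log(2 * np.pi * self.var_[c][mask])
                         + (x[mask] - self.mean_[c][mask]) ** 2 / self.var_[c][mask])
            logp[c] = np.log(self.priors_[c]) + ll.sum()
        m = max(logp.values())
        z = sum(np.exp(v - m) for v in logp.values())
        return {c: float(np.exp(v - m) / z) for c, v in logp.items()}

# Five commonly available inputs, e.g. overburden depth, rock mass quality,
# intact strength, tunnel diameter and support stiffness (illustrative only).
X = [[400, 30, 25, 8, 1.0], [900, 10, 5, 10, 0.5],
     [150, 60, 80, 6, 2.0], [700, 15, 10, 9, 0.7]]
y = [0, 1, 0, 1]                               # 1 = squeezing observed
model = NaiveBayesSqueezing().fit(X, y)
print(model.predict_proba([800, np.nan, 8, 10, np.nan]))   # incomplete case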
Resumo:
In this Project, a preliminary design of a dehydration unit for domestic gas is outlined. The unit that is the subject of the study belongs to a project named Gorgon, currently being developed by Chevron on Barrow Island, Australia. In order to conduct a proper design of such a unit, the characteristics of the natural gas being extracted are detailed, as well as the specifications of the pipeline that the gas will supply. After this, different techniques for dehydrating the gas are evaluated; absorption by glycol is the technique that best fits this Project and is therefore chosen. More precisely, the most suitable type of glycol for this particular unit is triethylene glycol, as it best matches the conditions of the project. Once the method is chosen, a simulation is undertaken to determine the number of stages required for the correct functioning of the unit, the glycol circulation rate and its purity. It is also necessary to estimate the unit's operating pressure and temperature and the dimensions that follow from them. In addition, pressures and temperatures are estimated for the glycol regeneration process, together with the dimensions of some of its units. Furthermore, the pressure and temperature at which the natural gas leaves the dehydration unit are estimated, and both the compression needed to secure the flow through the pipeline and the resulting pressure at the reception point are studied. Finally, an economic study is carried out in order to conclude whether or not this specific Project is feasible.
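As an illustration of the first-pass sizing that precedes the rigorous simulation, the sketch below applies the common rule of thumb of roughly 3 US gallons of TEG circulated per pound of water removed. The gas rate and inlet/outlet water contents are assumed, illustrative values, not figures from the Gorgon project.

# Back-of-the-envelope TEG circulation estimate (illustrative assumptions only).
GAS_RATE_MMSCFD = 100.0          # gas to be dried, MMscf/d (assumed)
W_IN_LB_PER_MMSCF = 90.0         # saturated water content at inlet (assumed)
W_OUT_LB_PER_MMSCF = 7.0         # pipeline water specification (assumed)
TEG_RATIO_GAL_PER_LB = 3.0       # typical 2-5 gal TEG per lb of water removed

water_removed_lb_per_day = GAS_RATE_MMSCFD * (W_IN_LB_PER_MMSCF - W_OUT_LB_PER_MMSCF)
teg_rate_gpm = water_removed_lb_per_day * TEG_RATIO_GAL_PER_LB / (24 * 60)
print(f"Water removed: {water_removed_lb_per_day:.0f} lb/day")
print(f"Approximate TEG circulation: {teg_rate_gpm:.1f} gal/min")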
Resumo:
Tritium release experiments using different breeding material candidates are planned for the medium flux region of the IFMIF Test Cell. Nowadays, only ceramic breeder materials have been suggested for testing in the Tritium Release Module located in the Medium Flux Test Module of IFMIF. Liquid breeder blankets are very promising and, for that reason, several concepts will be tested in ITER. One of the main problems concerning liquid blankets is the permeation of the tritium generated in the breeder through the walls. Since tritium permeation is highly influenced by irradiation conditions, IFMIF is a suitable scenario in which to perform experiments related to tritium permeation. In this paper, a preliminary design of a tritium permeation experiment for the Medium Flux Test Module of IFMIF is proposed, in order to contribute to the progress of liquid breeder blanket concept validation. The conceptual design of the capsule in which the experiment will be performed is carried out, taking into consideration the requirements of the experiment and its implementation in the Tritium Release Module. In addition, some thermal-hydraulic calculations have been performed to evaluate the thermal behaviour of the irradiation capsule.
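A minimal sketch of the sort of first-pass thermal check mentioned above follows: steady radial conduction through a cylindrical capsule wall, q = 2*pi*k*L*dT / ln(r2/r1). The heat load, wall conductivity and capsule geometry are assumed, illustrative values, not those of the proposed design.

import math

def wall_temperature_drop(q_w, k_w_mk, length_m, r_in_m, r_out_m):
    """Temperature difference across a cylindrical wall for a given heat load."""
    return q_w * math.log(r_out_m / r_in_m) / (2 * math.pi * k_w_mk * length_m)

# Illustrative numbers only: 500 W of nuclear heating, a steel wall, 0.2 m long capsule.
dT = wall_temperature_drop(q_w=500.0, k_w_mk=20.0, length_m=0.2,
                           r_in_m=0.020, r_out_m=0.022)
print(f"Temperature drop across capsule wall: {dT:.1f} K")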
Resumo:
A space tether is a thin, multi-kilometer-long conductive wire, joining a satellite and some opposite end mass, and kept vertical in orbit by the gravity gradient. The ambient plasma, being highly conductive, is equipotential in its own co-moving frame. In the tether frame, however, which is in relative motion, there is a motional electric field in the plasma of the order of 100 V/km, the product of the (near-)orbital velocity and the geomagnetic field. The electromotive force established over the tether length allows plasma contactor devices to collect electrons at one polarized-positive (anodic) end and eject electrons at the opposite end, setting up a current along a standard, fully insulated tether. The Lorentz force exerted on the current by the geomagnetic field itself is always drag; this relies on just thermodynamics, like air drag. The bare tether concept, introduced in 1992 at the Universidad Politécnica de Madrid (UPM), takes away the insulation and has electrons collected over the tether segment that comes out polarized positive; the concept rests on 2D (Langmuir probe) current collection in plasmas being greatly more efficient than 3D collection. A plasma contactor ejects electrons at the cathodic end. A bare tether with a thin-tape cross section has a much greater perimeter and de-orbits much faster than a (corresponding) round bare tether of equal length and mass. Further, tethers being long and thin, they are prone to cuts by abundant small space debris, but BETs has shown that the tape has a probability of being cut per unit time more than one order of magnitude smaller than the corresponding round tether (debris comparable to its width are much less abundant than debris comparable to the radius of the corresponding round tether). Also, the tape collects much more current, and de-orbits much faster, than a corresponding multi-line “tape” made of thin round wires cross-connected to survive debris cuts. Tethers use a dissipative mechanism quite different from air drag and can de-orbit in just a few months; also, tape tethers are much lighter than round tethers of equal length and perimeter, which can capture equal current. The 3 disparate tape dimensions allow an easily scalable design. Switching the cathodic contactor off and on allows maneuvering to avoid catastrophic collisions with big tracked debris. Lorentz braking is as reliable as air drag. Tethers are still reasonably effective at high inclinations, where the motional field is small, because the geomagnetic field is not just a dipole along the Earth polar axis. BETs is the EC FP7/Space Project 262972, funded with about 1.8 million euros, from 1 November 2010 to 31 January 2014, and carrying out RTD work on de-orbiting space debris. Coordinated by UPM, it has as partners Università di Padova, ONERA-Toulouse, Colorado State University, SME Emxys, DLR–Bremen, and Fundación Tecnalia. BETs work involves 1) designing, building, and ground-testing the basic hardware subsystems: Cathodic Plasma Contactor, Tether Deployment Mechanism, Power Control Module, and Tape with crosswise and lengthwise structure; 2) testing current collection and verifying tether dynamical stability; 3) preliminary design of tape dimensions for a generic mission, conducive to a low system-to-satellite mass ratio and probability of cut by small debris, and to an ohmic-effects regime of tether current for fast de-orbiting. Reaching TRL 4-5, BETs appears ready for in-orbit demonstration.
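The orders of magnitude quoted above can be reproduced with a short back-of-the-envelope calculation: the motional field E_m ≈ v*B and the Lorentz drag F ≈ I*L*B. The velocity, field, length and average current below are assumed, generic LEO values, not BETs design figures.

v_orb = 7.5e3        # orbital velocity, m/s (LEO, assumed)
B = 2.0e-5           # geomagnetic field, T (assumed)
L = 5.0e3            # tether length, m (assumed)
I_avg = 1.0          # average tether current, A (assumed)

E_m = v_orb * B                  # motional electric field, V/m
emf = E_m * L                    # electromotive force over the tether, V
F_drag = I_avg * L * B           # Lorentz drag force, N

print(f"Motional field: {E_m * 1e3:.0f} V/km")   # of the order of 100 V/km, as stated
print(f"EMF over tether: {emf:.0f} V")
print(f"Lorentz drag: {F_drag * 1e3:.0f} mN")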
Resumo:
La zona de Madrid al Este del Retiro ha estado indefectiblemente condicionada en su tardío desarrollo urbano por su posición a espaldas del Real Sitio. La construcción hacia 1640 de las tapias que rodeaban los reales jardines transformó la red de caminos que partían hacia oriente, aisló los terrenos ubicados más al Este de la ciudad, con la que ya sólo se podrían comunicar por las carreteras de Aragón y Valencia, y condenó las expectativas de desarrollo urbano reduciendo los precios de las propiedades, lo cual determinó durante décadas los usos y la arquitectura de la zona. El Anteproyecto de Ensanche de Carlos María de Castro constituye el germen a partir del cual, durante un lento proceso de casi cien años, fue configurándose la ciudad que hoy conocemos. La identificación en el Archivo de Villa del primer plano general del Ensanche trazado por Castro, del cual anteriores trabajos advirtieron de su existencia aunque se desconocía su localización, es la principal aportación de esta investigación. Por un lado, este primer plano general del Ensanche manuscrito es, por sí mismo, un documento de indudable importancia en la historia del urbanismo madrileño. En segundo lugar, el análisis de su contenido arroja nueva luz sobre la propuesta original de Castro, parcialmente censurada por la Dirección General de Obras Públicas antes de la aprobación del plan en 1860. Especialmente en lo referente a la zona de Madrid al Este del Retiro, proyectada como barrio obrero del Ensanche, este documento ha aportado un enfoque desconocido hasta ahora sobre el paisaje urbano concebido por Castro para la más ambiciosa propuesta planteada en mucho tiempo al problema de la vivienda obrera. Finalmente, el análisis de la factura del plano revela la superposición de varias capas de dibujo, evidenciando que durante un tiempo fue un documento vivo, utilizado como plano de trabajo por el equipo de Castro durante aproximadamente diez años, hasta la destitución del ingeniero en 1868. Posteriores análisis del plano sobre otros ámbitos de la ciudad arrojarán sin duda nuevos datos sobre el proceso proyectual del conjunto del Ensanche. Pero la dinámica de lo real, sintetizable en múltiples factores de índole social, económica y legislativa, transformó durante las primeras décadas de andadura del Ensanche la ciudad proyectada por Castro al Este del Retiro. El dibujo de la ciudad, entendido como herramienta de análisis y empleado con éxito en trabajos de investigación realizados por otros autores en la misma línea, ha permitido deducir la reconstitución gráfica del estado de la ciudad en diferentes momentos singulares del desarrollo urbanístico de la zona, así como de la propuesta original de barrio obrero de Castro. No hay que olvidar que, a pesar del escaso interés que suscitaba entre los inversores inmobiliarios el ámbito geográfico de estudio de esta tesis, fue objeto, durante casi un siglo, de numerosas propuestas de ordenación y urbanización que, aunque no llegaron a materializarse, fueron configurando una suerte de desarrollo virtual de la ciudad paralelo al devenir de la realidad. De esta forma, el dibujo se constituye en esta tesis como fuente de información, herramienta de pensamiento y resultado de la investigación en sí mismo, ilustrando y contribuyendo al mejor conocimiento de la forma urbana. ABSTRACT The area of Madrid to the East of the Retiro has been inevitably conditioned in its late urban development by its position behind the Royal Site. 
The construction around 1640 of the walls surrounding the royal gardens transformed the network of roads departing eastward, isolated the land located to the East of the city, with which it could then communicate only by the roads to Aragon and Valencia, condemned the expectations of urban development by reducing property prices, and determined for decades the uses and architecture of the area. Carlos María de Castro's preliminary design for the City Expansion is the germ from which, during a slow process of almost one hundred years, the city we know today took shape. The identification in the City Archive of the first general plan of the City Expansion traced by Castro, whose existence had been noted by previous studies although its location remained unknown, is the main contribution of this research. Firstly, this hand-drawn general plan of the City Expansion is by itself a document of undoubted importance in the history of Madrid urbanism. Secondly, the analysis of its content sheds new light on Castro's original proposal, partially censored by the Dirección General de Obras Públicas before the approval of the plan in 1860. Especially concerning the area of Madrid to the East of the Retiro, projected as a working-class district of the City Expansion, this document has provided a hitherto unknown view of the urban landscape designed by Castro for the most ambitious proposal put forward in a long time to address the problem of workers' housing. Finally, analysis of the execution of the drawing reveals the superposition of several layers, demonstrating that for a time it was a living document, used as a working plan by Castro's team for approximately ten years, until the dismissal of the engineer in 1868. Subsequent analysis of the drawing over other areas of the city will no doubt yield new data on the design process of the City Expansion as a whole. But the dynamics of reality, which can be summarized in multiple social, economic and legislative factors, transformed during its first decades the City Expansion designed by Castro to the East of the Retiro. Drawing of the city, understood as a tool of analysis and used successfully in research works by other authors along the same lines, has made it possible to deduce the graphic reconstitution of the state of the city at different singular moments of the urban development of the area, as well as of Castro's original proposal for the working-class district. It should not be forgotten that, despite the little interest that the geographic scope of this thesis aroused among real-estate investors, it was the object, for nearly a century, of numerous planning and development proposals which, although they never materialized, gradually configured a sort of virtual development of the city parallel to the course of reality. In this way, drawing is used in this thesis as a source of information, a tool of thought and an outcome of the research in itself, illustrating and contributing to a better understanding of the urban form.
Resumo:
In this paper, a rapid method for spacecraft sizing is presented. This method is useful in both the conceptual and preliminary design phases of scientific and communication satellites. The aim of the method is to provide a sizing procedure similar to the ones used in aircraft design, namely by determining the mass of all the spacecraft subsystems. In the Introduction, the importance of an accurate initial mass budget in satellite design is emphasized. The literature on this topic is not very extensive, and most of the existing methods are recapitulated. The methodology followed in the proposed procedure for spacecraft mass sizing is based on these methods. Data from 26 existing satellites have been used to obtain correlations between each subsystem mass and the mass of the whole spacecraft.
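The kind of correlation the paper builds can be illustrated with a small sketch: fit a subsystem's mass as a power law of the total spacecraft mass and use the fit for a first estimate on a new design. The data points below are made-up placeholders, not the 26-satellite database used in the paper.

import numpy as np

total_mass = np.array([450., 800., 1200., 2100., 3000.])      # kg (assumed)
structure_mass = np.array([90., 170., 230., 420., 560.])       # kg (assumed)

# Power-law fit m_sub = a * M_total**b via linear regression in log space.
b, log_a = np.polyfit(np.log(total_mass), np.log(structure_mass), 1)
a = np.exp(log_a)
print(f"structure mass ~ {a:.2f} * M_total^{b:.2f}")

# Use the correlation for a first mass budget of a new 1500 kg spacecraft.
print(f"Estimate for a 1500 kg spacecraft: {a * 1500 ** b:.0f} kg")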
Resumo:
Modern stepped spillways are typically designed for large discharge capacities corresponding to a skimming flow regime, for which flow resistance is predominantly form drag. The writer demonstrates that the inflow conditions have some effect on the skimming flow properties. Boundary layer calculations show that the flow properties at the inception of free-surface aeration are substantially different with a pressurized intake. A re-analysis of experimental results highlights that the equivalent Darcy friction factor is on average f ≈ 0.2 on uncontrolled stepped chutes and f ≈ 0.1 on stepped chutes with a pressurized intake. A simple design chart is presented to estimate the residual flow velocity, and the agreement of the calculations with experimental results is deemed satisfactory for preliminary design.
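A minimal sketch of how the quoted friction factors translate into a residual velocity estimate follows, assuming uniform (equilibrium) flow on a wide chute so that f = 8*g*d*sin(theta)/V^2 with q = V*d. It is an illustration, not the paper's design chart; the unit discharge and slope are assumed values.

import math

def residual_velocity(q, theta_deg, f):
    """Uniform-flow velocity (m/s) for unit discharge q (m2/s), slope theta and Darcy f."""
    g, theta = 9.81, math.radians(theta_deg)
    # Combining f = 8 g d sin(theta) / V^2 and d = q / V gives V^3 = 8 g q sin(theta) / f.
    return (8 * g * q * math.sin(theta) / f) ** (1 / 3)

# Illustrative case: q = 10 m2/s on a 45 degree stepped chute.
print(residual_velocity(10, 45, f=0.2))   # uncontrolled intake, f about 0.2
print(residual_velocity(10, 45, f=0.1))   # pressurized intake, f about 0.1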
Resumo:
Glass reinforced plastic (GRP) is now an established material for the fabrication of sonar windows. Its good mechanical strength, light weight, resistance to corrosion and acoustic transparency are all properties which fit it for this application. This thesis describes a study, undertaken at the Royal Naval Engineering College, Plymouth, into the mechanical behaviour of a circular cylindrical sonar panel. This particular type of panel would be used to cover a flank array sonar in a ship or submarine. The case considered is that of a panel with all of its edges mechanically clamped and subject to pressure loading on its convex surface. A comprehensive program of testing to determine the orthotropic elastic properties of the laminated composite panel material is described, together with a series of pressure tests on 1:5 scale sonar panels. These pressure tests were carried out in a purpose-designed test rig, using air pressure to provide simulated hydrostatic and hydrodynamic loading. Details of all instrumentation used in the experimental work are given in the thesis. The experimental results from the panel testing are compared with predictions of panel behaviour obtained from both the Galerkin solution of Flugge's cylindrical shell equations (orthotropic case) and finite element modelling of the panels using PAFEC. A variety of appropriate panel boundary conditions are considered in each case. A parametric study, intended to be of use as a preliminary design tool and based on the above Galerkin solution, is also presented. This parametric study considers cases of boundary conditions, material properties and panel geometry outside those investigated in the experimental work. Final conclusions are drawn and recommendations made regarding possible improvements to the procedures for design, manufacture and fixing of sonar panels in the Royal Navy.
Resumo:
Project Work submitted for the degree of Master in Civil Engineering, in the Specialization Area of Structures.
Resumo:
The PhD project addresses the potential of using concentrating solar power (CSP) plants as a viable alternative energy producing system in Libya. Exergetic, energetic, economic and environmental analyses are carried out for a particular type of CSP plant. Although the study addresses a particular configuration, a 50 MW parabolic trough CSP plant, it is sufficiently general to be applied to other configurations. The novelty of the study, in addition to the modeling and analysis of the selected configuration, lies in the use of a state-of-the-art exergetic analysis combined with Life Cycle Assessment (LCA). The modeling and simulation of the plant are carried out in chapter three and are divided into two parts, namely the power cycle and the solar field. The computer model developed for the analysis of the plant is based on algebraic equations describing the power cycle and the solar field. The model was solved using the Engineering Equation Solver (EES) software; it is designed to define the properties at each state point of the plant and then, sequentially, to determine energy, efficiency and irreversibility for each component. The developed model has the potential to be used in the preliminary design of CSP plants and, in particular, for the configuration of the solar field based on existing commercial plants. Moreover, it has the ability to analyze the energetic, economic and environmental feasibility of using CSP plants in different regions of the world, which is illustrated for the Libyan region in this study. The overall feasibility scenario is completed through an hourly analysis on an annual basis in chapter four. This analysis allows the comparison of different systems and, eventually, a particular selection, and it includes both the economic and energetic components using the "greenius" software. The analysis also examined the impact of project financing and incentives on the cost of energy. The main technological finding of this analysis is a higher performance and a lower levelized cost of electricity (LCE) for Libya as compared to Southern Europe (Spain). Therefore, Libya has the potential of becoming attractive for the establishment of CSP plants in its territory and, in this way, of facilitating the target of several European initiatives that aim to import electricity generated by renewable sources from North African and Middle East countries. The analysis also presents a brief review of the current cost of energy and of the potential for reducing the cost of electricity from parabolic trough CSP plants. Exergetic and environmental life cycle assessment analyses are conducted for the selected plant in chapter five; the objectives are 1) to assess the environmental impact and cost, in terms of exergy, of the life cycle of the plant; 2) to find out the points of weakness in terms of irreversibility of the process; and 3) to verify whether solar power plants can reduce the environmental impact and the cost of electricity generation, by comparing them with fossil fuel plants, in particular a Natural Gas Combined Cycle (NGCC) plant and an oil thermal power plant. The analysis also includes a thermoeconomic analysis using the specific exergy costing (SPECO) method to evaluate the level of cost caused by exergy destruction. The main technological findings are that the most important contribution to the impact lies with the solar field, which accounts for 79%, and that the materials with the highest impact are steel (47%), molten salt (25%) and synthetic oil (21%).
The “Human Health” damage category presents the highest impact (69%), followed by the “Resource” damage category (24%). In addition, the highest exergy demand is linked to the steel (47%); there is also a considerable exergetic demand related to the molten salt and the synthetic oil, with values of 25% and 19%, respectively. Finally, in the comparison with fossil fuel power plants (NGCC and oil), the CSP plant presents the lowest environmental impact, while the worst environmental performance is found for the oil power plant, followed by the NGCC plant. The solar field presents the largest cost rate, and the boiler is the component with the highest cost rate among the power cycle components. Thermal storage allows CSP plants to overcome solar irradiation transients, to respond to electricity demand independently of weather conditions, and to extend electricity production beyond the availability of daylight. A numerical analysis of the thermal transient response of a thermocline storage tank is carried out for the charging phase. The system of equations describing the numerical model is solved using time-implicit and space-backward finite differences and is encoded within the Matlab environment. The analysis yielded the following findings: the predictions agree well with the experiments for the time evolution of the thermocline region, particularly for the regions away from the top inlet. The deviations observed in the near-region of the inlet are most likely due to the high level of turbulence resulting from the localized mixing in this region; a simple analytical model to take this increased turbulence level into consideration was developed, and it leads to some improvement of the predictions; this approach requires practically no additional computational effort and relates the effective thermal diffusivity to the mean effective velocity of the fluid at each particular height of the system. Altogether the study indicates that the selected parabolic trough CSP plant has the edge over alternative competing technologies for locations where DNI is high and where land usage is not an issue, such as the shoreline of Libya.
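A minimal sketch of the numerical scheme described above follows (written here in Python rather than Matlab): the one-dimensional charging equation dT/dt + u dT/dx = alpha_eff d2T/dx2, discretized with an implicit (backward Euler) time step and a backward difference for the advection term. Tank height, fluid velocity, effective diffusivity and boundary temperatures are assumed, illustrative values, not the thesis data.

import numpy as np

def charge_thermocline(n=100, height=3.0, u=1e-4, alpha=5e-7,
                       t_hot=390.0, t_cold=290.0, dt=60.0, steps=720):
    dx = height / (n - 1)
    T = np.full(n, t_cold)                 # tank initially at the cold temperature
    c, d = u * dt / dx, alpha * dt / dx**2
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = -c - d               # backward (upwind) advection + diffusion
        A[i, i] = 1 + c + 2 * d
        A[i, i + 1] = -d
    A[0, 0] = 1.0                          # hot fluid injected at the top (x = 0)
    A[-1, -1], A[-1, -2] = 1.0, -1.0       # zero-gradient outflow at the bottom
    for _ in range(steps):
        rhs = T.copy()
        rhs[0], rhs[-1] = t_hot, 0.0
        T = np.linalg.solve(A, rhs)        # implicit step: solve A T_new = rhs
    return T

profile = charge_thermocline()
print(profile[::10])                        # temperature profile along the tank after 12 h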
Resumo:
Project Work submitted for the degree of Master in Civil Engineering, in the Specialization Area of Structures.
Resumo:
Methanol is an important and versatile compound with various uses as a fuel and as a feedstock chemical. Methanol is also a potential chemical energy carrier. Due to the fluctuating nature of renewable energy sources such as wind or solar, storage of energy is required to balance the varying supply and demand. Excess electrical energy generated at peak periods can be stored by using the energy in the production of chemical compounds. The conventional industrial production of methanol is based on gas-phase synthesis from synthesis gas generated from fossil sources, primarily natural gas. Methanol can also be produced by hydrogenation of CO2. The production of methanol from CO2 captured from emission sources, or even directly from the atmosphere, would allow sustainable production based on a nearly limitless carbon source, while helping to reduce the increasing CO2 concentration in the atmosphere. Hydrogen for the synthesis can be produced by electrolysis of water utilizing renewable electricity. A new liquid-phase methanol synthesis process has been proposed. In this process, a conventional methanol synthesis catalyst is mixed in suspension with a liquid alcohol solvent. The alcohol acts as a catalytic solvent by enabling a new reaction route, potentially allowing the synthesis of methanol at lower temperatures and pressures compared to conventional processes. For this thesis, the alcohol-promoted liquid-phase methanol synthesis process was tested at laboratory scale. Batch and semibatch reaction experiments were performed in an autoclave reactor, using a conventional Cu/ZnO catalyst and ethanol and 2-butanol as the alcoholic solvents. Experiments were performed in the pressure range of 30-60 bar and at temperatures of 160-200 °C. The productivity of methanol was found to increase with increasing pressure and temperature. In the studied process conditions, a maximum volumetric productivity of 1.9 g of methanol per liter of solvent per hour was obtained, while the maximum catalyst-specific productivity was found to be 40.2 g of methanol per kg of catalyst per hour. The productivity values are low compared to both industrial synthesis and gas-phase synthesis from CO2. However, the reaction temperatures and pressures employed were lower than in gas-phase processes. While the productivity is not high enough for large-scale industrial operation, the milder reaction conditions and simple operation could prove useful for small-scale operations. Finally, a preliminary design for an alcohol-promoted, liquid-phase methanol synthesis process was created using the data obtained from the experiments. The demonstration-scale process was sized around an electrolyzer unit producing 1 Nm3 of hydrogen per hour. This Master's thesis is closely connected to the LUT REFLEX platform.
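The scale implied by the 1 Nm3/h hydrogen basis and the reported productivities can be checked with simple arithmetic, as sketched below. Complete conversion by CO2 + 3 H2 -> CH3OH + H2O is assumed for the stoichiometric limit; the resulting solvent and catalyst inventories are illustrative, not the thesis design values.

MOLAR_VOLUME_NM3 = 0.022414      # m3 per mol of ideal gas at normal conditions
M_MEOH = 32.04                   # g/mol of methanol

h2_mol_per_h = 1.0 / MOLAR_VOLUME_NM3             # about 44.6 mol H2 per hour
# CO2 + 3 H2 -> CH3OH + H2O (stoichiometric limit, full conversion assumed)
meoh_g_per_h = h2_mol_per_h / 3 * M_MEOH           # about 476 g methanol per hour

solvent_litres = meoh_g_per_h / 1.9                # at 1.9 g/(L solvent * h)
catalyst_kg = meoh_g_per_h / 40.2                  # at 40.2 g/(kg catalyst * h)

print(f"Methanol (stoichiometric max): {meoh_g_per_h:.0f} g/h")
print(f"Solvent inventory implied:     {solvent_litres:.0f} L")
print(f"Catalyst inventory implied:    {catalyst_kg:.1f} kg")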