977 results for Standard Time
Abstract:
Continued by another publication with the same title, October 1906 (Cd. 3245).
Abstract:
Mode of access: Internet.
Abstract:
In this paper we propose the use of least-squares methods for obtaining digital rational approximations (IIR filters) to fractional-order integrators and differentiators of the type s^α, α ∈ R. Adoption of the Padé, Prony and Shanks techniques is suggested. These techniques are usually applied in the signal modeling of deterministic signals. The methods yield suboptimal solutions to the problem that require only the solution of a set of linear equations. The results reveal that the least-squares approach gives approximations similar or superior to those of other widely used methods. Their effectiveness is illustrated in both the time and frequency domains, as well as in the fractional differintegration of some standard time-domain functions.
Abstract:
On the Limits of Greenwich Mean Time, or The Failure of a Modernist Revolution. From the introduction of World Standard Time in 1884 to Einstein’s theory of relativity, the nature and regulation of time was a highly contested issue in modernism, with profound political, social and epistemological consequences. Modernist aesthetic sensibilities widely revolted against the increasingly strict rule of the clock, which, as Georg Simmel observed in “The Metropolis and Mental Life,” was established as the necessary basis of capitalist, urban life. This paper will focus on the contending conceptions of time arising in key modernist texts by authors such as Joyce, Woolf and Conrad. I will argue that the uniformity and regularity of time necessary to a rising capitalist society came under attack in similar ways from both modernist literary aesthetics and new scientific discoveries. However, while Einstein’s theory of relativity may have led to a subsequent paradigm change in scientific thought, it failed to significantly alter social and popular conceptions of time. Although alternative ways of thinking and living with time are proposed by modernist authors, they remain isolated aesthetic experiments, ineffectual against the regulatory pressure of economic and social structures. In this struggle over the nature of time, I suggest, science and literature join forces against a society increasingly governed by economic reason. The fact that they lost this struggle can serve as a striking illustration of a growing shift of social influence from science and art towards the economy.
Abstract:
Dissertation presented to the Instituto Politécnico do Porto for the degree of Master in Management of Organizations, Business Management branch, supervised by Prof. Doutora Maria Alexandra Pacheco Ribeiro da Costa. This dissertation includes the criticisms and suggestions made by the jury.
Abstract:
This work was carried out within the Master's programme in Mechanical Engineering, specialization in Industrial Management, of the Instituto Superior de Engenharia do Porto. The study was developed at Continental Mabor – Indústria de Pneus S.A., analysing the visual inspection process for tyres. Given current market conditions, companies must have detailed and accurate data on their production processes. Installed capacity is a decisive parameter, since it directly conditions the response to customer orders. It is strongly influenced by the factory layout, so optimizing the layout is fundamental for gaining production capacity. The report began with the determination of the predicted time of the operation according to the REFA standard. The current disturbances were then quantified through process audits, yielding an installed capacity of 59,380 tyres/day. The analysis of the disturbances was developed from a cause-and-effect diagram, in which several potential causes were identified and subsequently ranked by a team experienced in and knowledgeable about the process. With the highest-impact disturbances identified, a layout solution was proposed to minimize them. The estimated capacity gain after implementing the proposed solution is 3,000 tyres/day. This 5% gain is significant in that it is obtained without acquiring new equipment or additional factory floor space. The implementation is also expected to bring improvements to the subsequent production process, Uniformity, specifically in its feeding. Quantifying that improvement, as a follow-up to this work, is an opportunity for future study.
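The quoted 5% figure can be checked with a line of arithmetic using the capacity numbers given in the abstract:

```python
capacity = 59380                     # installed capacity, tyres/day (REFA study + audits)
gain = 3000                          # estimated gain from the proposed layout, tyres/day
pct_gain = 100 * gain / capacity     # relative gain in percent (≈ 5%)
new_capacity = capacity + gain       # capacity after the layout change
```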
Abstract:
This internship report was developed within the final-year industrial internship of the Master's programme in Industrial Engineering and Management at ESEIG. The internship took place at Continental Mabor S.A., located in Vila Nova de Famalicão, specifically in the Industrial Engineering Department. The tyre industry operates in an increasingly competitive and demanding market, in which costs and delivery times must keep falling and product quality must keep rising. For these reasons, constant improvement of the production processes is necessary. For this continuous-improvement effort to succeed, production bonuses are sometimes needed to encourage higher productivity. However, each tyre has different production times from machine to machine, and these times must be corrected so that bonuses can be awarded fairly. The main objective of this work was to update the disturbances and the machine times in the tyre-building sector. Initially, the application time of the capply (a component of the tyre) was collected in order to verify whether it could be predicted from the length of material applied. Since no correlation could be established, the analysis leading to that conclusion is not presented. A study of the disturbances was then carried out, using time study by stopwatch, in the largest of the factory's five divisions: tyre building. Finally, machine times for several tyre-building types were collected on all the tyre-building modules (KM and PU) using video recording. It was possible to individualize the times for each machine so as to make the production bonuses awarded to operators fairer in the short term, and to optimize production and the standard time.
Abstract:
The TRMM-LBA field campaign was held during the austral summer of 1999 in southwestern Amazonia. Among the major objectives was the identification and description of the diurnal variability of rainfall in the region, associated with the different rain-producing weather systems that occurred during the January-February season. Using a network of 40 digital rain gauges deployed in the state of Rondônia, together with observations and analyses of circulation and convection, it was possible to identify details of the diurnal cycle of rainfall and the associated rainfall mechanisms. Rainfall episodes were characterized by regimes of "low-level easterly" and "westerly" winds in the context of the large-scale circulation. The westerly regime is related to an enhanced South Atlantic Convergence Zone (SACZ) and an intense and/or wide Low Level Jet (LLJ) east of the Andes, which can extend eastward towards Rondônia, even though some westerly regime episodes also show a LLJ that remains close to the foothills of the Andes. The easterly regime is related to easterly propagating systems (e.g. squall lines) with possibly weakened or less frequent LLJs and a suppressed SACZ. Diurnal variability of rainfall during the westerly surface wind regime shows a characteristic maximum in late afternoon followed by a relatively weaker second maximum in early evening (21:00 Local Standard Time, LST). The easterly regime composite shows an early morning maximum followed by an even stronger maximum in the afternoon.
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and the computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied to random parameters? For instance, is there only a small number of sources of time instability in the coefficients of empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregression (FAVAR) model. Earlier studies based on VAR models found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy, and that it helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy, using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have a significant effect on measures of real activity, price indices, and leading and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors.
Moreover, it gives an interpretation of the factors without restricting their estimation. In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot both follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two parameter-reduction methods. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps to better forecast the major macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives consistent and precise results for the effects and transmission of monetary policy in the United States. In contrast to the FAVAR model used in that earlier study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters in the dynamic process of the factors. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using a structural FAVARMA model.
Within the theoretical framework of the financial accelerator developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has a significant effect on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the US market. Finally, given the identification procedure for the structural shocks, we find economically interpretable factors. The behaviour of economic agents and of the economic environment can vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variability in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is repeated with data covering the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows a marked change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only 5 dynamic factors govern the time instability of almost 700 coefficients.
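The factor extraction underlying FAVAR-type models is typically done by principal components on a standardized panel; the sketch below shows the generic estimator on synthetic data, not the thesis's exact procedure, and the function name and panel dimensions are illustrative:

```python
import numpy as np

def extract_factors(X, k):
    """Principal-components estimates of k common factors from a T x N
    panel X: standardize each series, then take the k leading left
    singular vectors (with the usual normalization F'F/T = I)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    T = Z.shape[0]
    F = U[:, :k] * np.sqrt(T)              # T x k factor estimates
    L = (Vt[:k].T * s[:k]) / np.sqrt(T)    # N x k loadings, so Z ≈ F @ L.T
    return F, L

# Synthetic data-rich panel: 200 periods, 50 series driven by 2 true factors
rng = np.random.default_rng(1)
F0 = rng.normal(size=(200, 2))             # true factors
Lam = rng.normal(size=(50, 2))             # true loadings
X = F0 @ Lam.T + 0.3 * rng.normal(size=(200, 50))
F, L = extract_factors(X, 2)
```

The estimated factors are identified only up to a rotation, so a fair check is whether they span the same space as the true factors, not whether they match column by column.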
Abstract:
Observations at the Mauna Loa Observatory, Hawaii, established the systematic increase of anthropogenic CO2 in the atmosphere. For the same reasons that this site provides excellent globally averaged CO2 data, it may provide temperature data with global significance. Here, we examine hourly temperature records, averaged annually for 1977–2006, to determine linear trends as a function of time of day. For night-time data (22:00 to 06:00 local standard time, LST) there is a near-uniform warming of 0.040 °C yr⁻¹. During the day, the linear trend shows a slight cooling of −0.014 °C yr⁻¹ at 12:00 LST (noon). Overall, at Mauna Loa Observatory, there is a mean warming trend of 0.021 °C yr⁻¹. The dominance of night-time warming results in a relatively large annual decrease in the diurnal temperature range (DTR) of −0.050 °C yr⁻¹ over the period 1977–2006. These trends are consistent with the observed increases in the concentrations of CO2 and its role as a greenhouse gas (demonstrated here by first-order radiative forcing calculations), and indicate the possible relevance of the Mauna Loa temperature measurements to global warming.
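The hour-by-hour trend analysis described above amounts to an ordinary least-squares slope fit per hour of day; a minimal sketch on synthetic data (the numbers below are illustrative, not the Mauna Loa record):

```python
import numpy as np

years = np.arange(1977, 2007)        # 30 annual means per hour of day
hours = np.arange(24)
rng = np.random.default_rng(0)

# Synthetic hourly means: ~0.040 °C/yr warming at night (22:00-06:00),
# slight cooling during the day, plus small year-to-year noise
slope = np.where((hours >= 22) | (hours <= 6), 0.040, -0.014)
temps = (15.0 + slope[:, None] * (years - years[0])
         + rng.normal(0, 0.05, (24, years.size)))

# Fit a linear trend (°C per year) to each hour's annual series
trends = np.array([np.polyfit(years, temps[h], 1)[0] for h in hours])
night_trend = trends[np.r_[22:24, 0:7]].mean()   # 22:00-06:00 LST
noon_trend = trends[12]                          # 12:00 LST
```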
Abstract:
In the present study, we compared six different solubilization buffers and optimized two-dimensional electrophoresis (2-DE) conditions for human lymph node proteins. In addition, we developed a simple protocol for 2-D gel storage. Efficient solubilization was obtained with lysis buffers containing (a) 8 M urea, 4% CHAPS (3-[(3-cholamidopropyl) dimethylammonio]-1-propanesulfonate), 40 mM Tris base, 65 mM DTT (dithiothreitol) and 0.2% carrier ampholytes; (b) 5 M urea, 2 M thiourea, 2% CHAPS, 2% SB 3-10 (N-decyl-N,N-dimethyl-3-ammonio-1-propanesulfonate), 40 mM Tris base, 65 mM DTT and 0.2% carrier ampholytes or (c) 7 M urea, 2 M thiourea, 4% CHAPS, 65 mM DTT and 0.2% carrier ampholytes. The optimal protocol for isoelectric focusing (IEF) was accumulated voltage of 16,500 Vh and 0.6% DTT in the rehydration solution. In the experiments conducted for the sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), best results were obtained with a doubled concentration (50 mM Tris, 384 mM glycine, 0.2% SDS) of the SDS electrophoresis buffer in the cathodic reservoir as compared to the concentration in the anodic reservoir (25 mM Tris, 192 mM glycine, 0.1% SDS). Among the five protocols tested for gel storing, success was attained when the gels were stored in plastic bags with 50% glycerol. This is the first report describing the successful solubilization and 2D-electrophoresis of proteins from human lymph node tissue and a 2-D gel storage protocol for easy gel handling before mass spectrometry (MS) analysis.
Abstract:
Objective: To determine the accuracy of the variables related to the fixed-height stair-climbing test (SCT), using maximal oxygen uptake (V̇O2max) as the gold standard. Methods: The SCT was performed on a staircase consisting of 6 flights (72 steps; 12.16 m total height), with verbal encouragement, in 51 patients. Stair-climbing time was measured, and the variables 'work' and 'power' were also calculated. The V̇O2max was measured using ergospirometry according to the Balke protocol. We calculated the Pearson linear correlation (r), as well as the values of p, between the SCT variables and V̇O2max. To determine accuracy, the V̇O2max cut-off point was set at 25 mL/kg/min, and individuals were classified as normal or altered. The cut-off points for the SCT variables were determined using the receiver operating characteristic curve. The Kappa statistic (k) was used to assess concordance. Results: The following values were obtained for the variable 'time': cut-off point = 40 s; mean = 41 ± 15.5 s; r = −0.707; p < 0.005; specificity = 89%; sensitivity = 83%; accuracy = 86%; and k = 0.724. For 'power', the values obtained were as follows: cut-off point = 200 W; mean = 222.3 ± 95.2 W; r = 0.515; p < 0.005; specificity = 67%; sensitivity = 75%; accuracy = 71%; and k = 0.414. Since the correlation between the variable 'work' and V̇O2max was not significant, that variable was discarded. Conclusion: Of the SCT variables tested, using V̇O2max as the gold standard, the variable 'time' was the most accurate.
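The 'work' and 'power' variables presumably follow from basic mechanics on the fixed 12.16 m climb; the sketch below assumes W = m·g·h and P = W/t (the abstract does not spell out its formulas) and uses a hypothetical 70 kg patient:

```python
G = 9.81            # gravitational acceleration, m/s^2
HEIGHT = 12.16      # total stair height, m (6 flights, 72 steps)

def sct_work_power(mass_kg, time_s):
    """Mechanical work (J) and mean power (W) for the stair-climbing
    test, assuming W = m*g*h and P = W/t."""
    work = mass_kg * G * HEIGHT
    return work, work / time_s

# Hypothetical 70 kg patient at the study's mean climbing time of 41 s;
# the resulting mean power lands near the 200 W power cut-off reported
work, power = sct_work_power(70.0, 41.0)
```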
Abstract:
Purpose: This study aimed to evaluate the effect of different storage periods in artificial saliva and of thermal cycling on the Knoop hardness of 8 commercial brands of resin denture teeth. Methods: Eight different brands of resin denture teeth were evaluated (Artplus, Biolux, Biotone IPN, Myerson, SR Orthosit, Trilux, Trubyte Biotone, and Vipi Dent Plus groups). Twenty-four teeth of each brand had their occlusal surfaces ground flat and were embedded in autopolymerized acrylic resin. After polishing, the teeth were subjected to different conditions: (1) immersion in distilled water at 37 ± 2 °C for 48 ± 2 h (control); (2) storage in artificial saliva at 37 ± 2 °C for 15, 30 and 60 days; and (3) thermal cycling between 5 and 55 °C with 30-s dwell times for 5000 cycles. The Knoop hardness test was performed after each condition. Data were analyzed with two-way ANOVA and Tukey's test (α = .05). Results: In general, the SR Orthosit group presented the highest statistically significant Knoop hardness value, while the Myerson group exhibited the lowest statistically significant mean (P < .05) in the control period, after thermal cycling, and after all storage periods. The Knoop hardness means obtained before the thermal cycling procedure (20.34 ± 4.45 KHN) were statistically higher than those obtained after thermal cycling (19.77 ± 4.13 KHN). All brands of resin denture teeth were significantly softened after the storage period in artificial saliva. Conclusion: Storage in saliva and thermal cycling significantly reduced the Knoop hardness of the resin denture teeth. SR Orthosit denture teeth showed the highest Knoop hardness values regardless of the condition tested. © 2010 Japan Prosthodontic Society.
Abstract:
With recent advances in technology and drug-delivery research, the modernization of tests, and greater emphasis on predicting therapeutic effect by means of in vitro tests, the dissolution test and the study of dissolution profiles are gaining ever more importance. Though introduced initially as a way of characterizing the release profile of poorly soluble drugs, dissolution tests are currently part of pharmacopoeial monographs for almost all oral solid pharmaceutical forms. The objective of this study was to determine the dissolution profile (percent drug dissolved versus time) of the pioneer-brand, generic and similar pharmaceutical capsules containing 500 mg cephalexin. Three pharmaceutical brands (reference, generic and similar) were subjected to the dissolution test and their in vitro dissolution profiles were recorded. From the results of the dissolution test, it was concluded that the samples met the acceptance criterion, as no difference was observed in the percentage of the drug dissolved in a standard time. The dissolution profile indicated that this medicine, in this pharmaceutical form, dissolves readily (85% of the drug dissolved in 15 minutes), and the curves showed great similarity, suggesting that the 3 brands are pharmaceutically equivalent.
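Similarity between dissolution curves of this kind is conventionally quantified with the f2 similarity factor (f2 ≥ 50 indicating similar profiles); the abstract does not say which metric was used, so this sketch and its sample profiles are purely illustrative:

```python
import math

def f2_similarity(ref, test):
    """FDA/EMA f2 similarity factor for two dissolution profiles given as
    cumulative percent dissolved at matched time points; f2 >= 50 means
    the profiles are considered similar."""
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return 50 * math.log10(100 / math.sqrt(1 + msd))

# Hypothetical cumulative %-dissolved values at 5, 10, 15 and 30 min
reference = [45, 70, 86, 98]
generic = [42, 68, 85, 97]
f2 = f2_similarity(reference, generic)   # well above the 50 threshold
```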
Abstract:
The aims of this study are twofold. First, the study tries to provide the most reliable chronology possible for two critical sections by correlating the magnetic polarity stratigraphy measured in these sediments with a newly revised geomagnetic polarity time scale. Second, this study attempts to examine in detail the nature of seven short events not included in the shipboard standard time scale, but for which abundant magnetostratigraphic evidence was obtained during the Leg. Data presented here force some modifications of the shipboard interpretations of the magnetostratigraphy of Sites 845 and 844 on the basis of new data generated using discrete samples and from a greater appreciation of the magnetostratigraphic signature of Miocene-age short events. Those short events can be classified into two groups: those that probably reflect short, full-polarity intervals and those that more likely represent an interval of diminished geomagnetic intensity. Three of the seven events documented here correspond well with three subtle features, as seen in marine magnetic profiles, that have been newly included in the geomagnetic polarity time scale as short, full-polarity chrons. One of the seven events corresponds to a poorly defined feature of the marine magnetic record that has also been newly included in the geomagnetic polarity time scale, but which was considered of enigmatic origin. The three remaining events investigated here, although they have not been identified with features in the seafloor magnetic record, are suggested to be events of a similar nature, most likely times of anomalously low geomagnetic intensity. In addition to the Miocene magnetostratigraphic results given, several sets of averaged paleomagnetic inclinations are presented. 
Although these results clearly show the effects of a residual coring overprint, they demonstrate that paleomagnetic estimates of paleolatitudes can be made which are in good general agreement with ancient site positions calculated using hot spot-based plate reconstructions.