905 results for Fold Block-designs
Abstract:
The objective of the current study was to investigate the mechanism by which the corpus luteum (CL) of the monkey undergoes desensitization to luteinizing hormone following exposure to increasing concentrations of human chorionic gonadotrophin (hCG), as occurs in pregnancy. Female bonnet monkeys were injected (im) with increasing doses of hCG or deglycosylated hCG (dghCG) beginning on day 6 or 12 of the luteal phase, for 10, 4 or 2 days. The day of the oestrogen surge was considered day '0' of the luteal phase. Luteal cells obtained from the CL of these animals were incubated with hCG (2 and 200 pg/ml) or dbcAMP (2.5, 25 and 100 µM) for 3 h at 37 °C, and the progesterone secreted was estimated. Corpora lutea of normal cycling monkeys on day 10/16/22 of the luteal phase were used as controls. In addition, the in vivo response to hCG and dghCG was assessed by determining serum steroid profiles following their administration. hCG (15-90 IU) but not dghCG (15-90 IU) treatment in vivo significantly (P < 0.05) elevated serum progesterone and oestradiol levels. Serum progesterone, however, could not be maintained at an elevated level by continuous treatment with hCG (from days 6-15), the progesterone level declining beyond day 13 of the luteal phase. Administering low doses of hCG (15-90 IU/day) from days 6-9, or high doses (600 IU/day) on days 8 and 9 of the luteal phase, resulted in a significant increase (about 10-fold over the corresponding control, P < 0.005) in the ability of luteal cells to synthesize progesterone in vitro (incubated controls). The luteal cells of the treated animals responded to dbcAMP (P < 0.05) but not to hCG added in vitro. The in vitro response of luteal cells to added hCG was inhibited by 0, 50 and 100% if the animals were injected with low (15-90 IU) or medium (100 IU) doses of dghCG between days 6-9 of the luteal phase, or high doses (600 IU on days 8 and 9 of the luteal phase), respectively; such treatment had no effect on the responsiveness of the cells to dbcAMP. The luteal cell responsiveness to dbcAMP in vitro was also blocked if hCG was administered for 10 days beginning on day 6 of the luteal phase. Although short-term hCG treatment during the late luteal phase (days 12-15) had no effect on luteal function, 10-day treatment beginning on day 12 of the luteal phase resulted in a regain of in vitro responsiveness to both hCG (P < 0.05) and dbcAMP (P < 0.05), suggesting that luteal rescue can occur even at this late stage. In conclusion, desensitization of the CL to hCG appears to be governed by the dose and the period for which it is exposed to hCG/dghCG. That desensitization is due to receptor occupancy is brought out by the facts that (i) it can be achieved by giving a larger dose of hCG over a 2-day period instead of a lower dose of the hormone for a longer (4- to 10-day) period, and (ii) the effect can largely be reproduced by using dghCG instead of hCG to block the receptor sites. It appears that, to achieve desensitization to dbcAMP as well, it is necessary to expose the luteal cells to a relatively high dose of hCG for more than 4 days.
Abstract:
A large part of the rural population of developing countries uses traditional biomass stoves to meet their cooking and heating energy demands. These stoves have very low thermal efficiency, and most of them cannot handle agricultural wastes. Thus, there is a need to develop an alternative cooking device that is simple, efficient and can handle a range of biomass including agricultural wastes. In this work, a highly densified solid fuel block using a range of low-cost agro-residues has been developed to meet cooking and heating needs. A strategy was adopted to determine the most suitable raw materials, optimized in terms of cost and performance. Several experiments were conducted using solid fuel blocks manufactured from various raw materials in different proportions; a fuel block composed of 40% biomass, 40% charcoal powder, 15% binder and 5% oxidizer was found to fulfil the requirement. Based on this finding, fuel blocks of two configurations, cylindrical with a single hole and with multiple holes (3, 6, 9 and 13), were constructed and their performance was evaluated. The 13-hole solid fuel block met the requirement of domestic cooking; the mean thermal power was 1.6 kWth with a burn time of 1.5 h, and the maximum thermal efficiency recorded for this design was 58%. The power level of the single-hole solid fuel block was lower but adequate for barbecue cooking applications.
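Thermal efficiency figures such as the 58% quoted above are conventionally obtained from a water-boiling-type test: useful heat transferred to the pot divided by the chemical energy of the fuel burned. The sketch below shows that standard energy balance; all numerical inputs (water mass, temperature rise, evaporated mass, fuel mass, heating value) are illustrative assumptions, not measurements from this study.

```python
# Water-boiling-test efficiency: heat absorbed by the water (sensible plus
# latent) divided by the energy released by the burned fuel.
CP_WATER = 4.186e3    # J/(kg*K), specific heat of water
H_FG = 2.26e6         # J/kg, latent heat of vaporization of water

def thermal_efficiency(m_water, dT, m_evap, m_fuel, lhv_fuel):
    useful = m_water * CP_WATER * dT + m_evap * H_FG  # heat into the water
    released = m_fuel * lhv_fuel                      # energy in burned fuel
    return useful / released

# Hypothetical test: 5 kg of water heated by 75 K, 0.4 kg evaporated,
# 0.35 kg of fuel block burned, assumed lower heating value 18 MJ/kg.
eff = thermal_efficiency(5.0, 75.0, 0.4, 0.35, 18e6)
print(f"thermal efficiency = {eff:.0%}")
```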
Abstract:
The purpose of this study was to compare kinematics and kinetics during walking for healthy subjects using unstable shoes of different designs. Ten subjects participated in this study, and foot biomechanical data during walking were quantified using a motion analysis system and a force plate. Data were collected for the unstable shoe conditions after an accommodation period of one week. With soft material added in the heel region, the peak impact force was effectively reduced when compared among similar shapes. In addition, soft material added in the rocker bottom produced a more dorsiflexed position during initial stance. The shoe with a three-rocker-curve design reduced the contact area at heel strike, which may increase the forward speed of the body. Further studies should be carried out after subjects have adapted to long periods of wearing unstable shoes.
Abstract:
Sampling strategies are developed based on the idea of ranked set sampling (RSS) to increase efficiency and thereby reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or catch rates of the fishery. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian northern prawn fishery (NPF), and fish age prediction by simple linear regression modelling for a short-lived tropical clupeoid. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were identified under different optimality criteria. For the NPF monitoring survey, the efficiency, in terms of the variance or mean squared error of the estimated mean abundance index, ranged from 114 to 199% relative to SRS. In the case of a fish ageing study for Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
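To make the RSS idea concrete, here is a minimal Monte Carlo sketch comparing the variance of an RSS mean estimator (units ranked on a correlated concomitant variable, with only the target-ranked unit in each set measured) against SRS of the same size. The population model, correlation strength, set size, and number of cycles are illustrative assumptions, not the NPF survey data.

```python
import numpy as np

rng = np.random.default_rng(42)

def srs_mean(pop_y, n):
    # Mean of a simple random sample of n units.
    return rng.choice(pop_y, size=n, replace=False).mean()

def rss_mean(pop_y, pop_x, m, r):
    # Ranked set sample mean: r cycles, set size m. Each set of m units is
    # ranked on the concomitant x; only the unit holding the target rank
    # is measured on y, so m*r units are measured in total.
    ys = []
    for _ in range(r):
        for rank in range(m):
            idx = rng.choice(len(pop_y), size=m, replace=False)
            order = idx[np.argsort(pop_x[idx])]
            ys.append(pop_y[order[rank]])
    return np.mean(ys)

# Hypothetical population: concomitant x (e.g. prior catch rate) strongly
# correlated with the variable of interest y (e.g. abundance).
x = rng.gamma(2.0, 1.0, size=10_000)
y = 5.0 + 2.0 * x + rng.normal(0.0, 1.0, size=10_000)

m, r, reps = 4, 5, 2_000   # set size 4, 5 cycles -> 20 measured units
srs = np.array([srs_mean(y, m * r) for _ in range(reps)])
rss = np.array([rss_mean(y, x, m, r) for _ in range(reps)])
print("relative efficiency (var SRS / var RSS):", srs.var() / rss.var())
```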
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that the overall Type I and Type II errors of the trial are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability of treatment efficacy crosses certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
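For reference, the frequentist operating characteristics of a classical Simon two-stage design can be computed directly from binomial tail probabilities. The sketch below does this for an illustrative design (n1 = 10, r1 = 1, n = 29, r = 5, with p0 = 0.10 and p1 = 0.30); it shows the error rates the article's Bayesian version is designed to control, not the article's own method.

```python
from scipy.stats import binom

def prob_promising(p, n1, r1, n, r):
    # Probability of declaring the treatment promising under response rate p:
    # continue past stage 1 only if responses among n1 patients exceed r1,
    # then require total responses over all n patients to exceed r.
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        # binom.sf(k, m, p) = P(X > k): need more than r - x1 further
        # responses among the n - n1 second-stage patients.
        total += binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p)
    return total

# Illustrative design and hypotheses.
alpha = prob_promising(0.10, 10, 1, 29, 5)   # Type I error at p0
power = prob_promising(0.30, 10, 1, 29, 5)   # power at p1
print(f"alpha = {alpha:.3f}, power = {power:.3f}")
```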
Abstract:
The goal of this article is to provide a new design framework, and its corresponding estimation, for phase I trials. Existing phase I designs assign each subject to one dose level based on responses from previous subjects. Yet subjects with neither toxicity nor efficacy responses could be treated at higher dose levels, and their subsequent responses to higher doses would provide more information. In addition, for some trials it might be possible to obtain multiple responses (repeated measures) from a subject at different dose levels. In this article, a nonparametric estimation method is developed for such studies. We also explore how designs with multiple doses per subject can be implemented to improve design efficiency. The gain in efficiency from "single dose per subject" to "multiple doses per subject" is evaluated for several scenarios. Our numerical study shows that using "multiple doses per subject" together with the proposed estimation method increases efficiency substantially.
Abstract:
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable theoretically via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as that of the bandit problem with k dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies of practical interest, as the computational complexity is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on CRM and approximates the optimal strategy better. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial are given to illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategies perform well and appear to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy if more extensive computing is available.
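The CRM that the first compromised strategy builds on assigns each new cohort to the dose whose posterior-estimated toxicity is closest to a target rate. Below is a minimal sketch of that standard CRM step, using a one-parameter power model with a normal prior; the skeleton, prior variance, target, and trial data are illustrative assumptions, and the article's decision-theoretic refinements are not reproduced here.

```python
import numpy as np

def crm_next_dose(skeleton, doses_given, tox, target=0.25):
    # One-parameter power model: p_d = skeleton[d] ** exp(a), a ~ N(0, 1.34).
    # Posterior over a on a uniform grid; pick the dose whose posterior-mean
    # toxicity probability is closest to the target rate.
    a = np.linspace(-4.0, 4.0, 801)
    log_post = -a**2 / (2 * 1.34)                 # log prior (up to a constant)
    for d, y in zip(doses_given, tox):
        p = skeleton[d] ** np.exp(a)
        log_post += np.log(p) if y else np.log1p(-p)
    w = np.exp(log_post)
    w /= w.sum()                                  # normalized grid weights
    p_mean = np.array([(skeleton[d] ** np.exp(a) * w).sum()
                       for d in range(len(skeleton))])
    return int(np.argmin(np.abs(p_mean - target)))

# Hypothetical skeleton and accumulated data: one toxicity seen at dose 2.
skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])
print(crm_next_dose(skeleton, doses_given=[0, 1, 2], tox=[0, 0, 1]))
```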
Abstract:
The monohydrate of the protected amino-terminal pentapeptide of suzukacillin, t-butoxycarbonyl-α-aminoisobutyryl-L-prolyl-L-valyl-α-aminoisobutyryl-L-valine methyl ester, C29H51N5O8, crystallizes in the orthorhombic space group P212121 with a = 10.192, b = 10.440, c = 32.959 Å, and Z = 4. The structure has been solved by direct methods and refined to an R value of 0.101 for 1827 observed reflections. The molecule exists as a four-fold helix with a pitch of 5.58 Å. The helix is stabilised by N–H⋯O hydrogen bonds, two of the 5→1 type (corresponding to the α-helix) and the third of the 4→1 type (3₁₀ helix). The carbonyl oxygen of the amino-protecting group accepts two hydrogen bonds, one each from the amide NH groups of the third (4→1) and fourth (5→1) residues. The remaining 5→1 hydrogen bond is between the two terminal residues. The lone water molecule in the structure is hydrogen bonded to the carbonyl oxygens of the prolyl residue in one molecule and the non-terminal valyl residue in a symmetry-related molecule.
Abstract:
Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process that continues until a promising treatment is identified. This formulation is considered more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. It also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate these ideas using a study in soft tissue sarcoma.
Abstract:
The purpose of a phase I trial in cancer is to determine the level (dose) of the treatment under study that has an acceptable level of adverse effects. Although substantial progress has recently been made in this area using parametric approaches, the method in widespread use is based on treating small cohorts of patients at escalating doses until the frequency of toxicities seen at a dose exceeds a predefined tolerable toxicity rate. This method is popular because of its simplicity and freedom from parametric assumptions. In this paper, we consider cases in which it is undesirable to assume a parametric dose-toxicity relationship. We propose a simple model-free approach by modifying the method in common use. The approach assumes toxicity is nondecreasing with dose and fits an isotonic regression to the accumulated data. At any point in the trial, the dose given is the one whose estimated toxicity is deemed closest to the maximum tolerable toxicity. Simulations indicate that this approach performs substantially better than the commonly used method and compares favorably with other phase I designs.
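The isotonic fit at the heart of this approach can be computed with the pool-adjacent-violators algorithm (PAVA). Below is a minimal sketch of one dose-assignment step under the stated monotonicity assumption; the accumulated trial data, the target toxicity of 0.30, and the simple nearest-to-target rule (first dose on ties) are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def isotonic_tox(tox_count, n_treated):
    # Pool-adjacent-violators: fit a nondecreasing toxicity-rate curve to the
    # observed per-dose rates, weighted by the number of patients per dose.
    rates = tox_count / n_treated
    blocks = [[r, w, 1] for r, w in zip(rates, n_treated)]  # [mean, weight, size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:       # monotonicity violated: pool
            m0, w0, s0 = blocks[i]
            m1, w1, s1 = blocks[i + 1]
            blocks[i] = [(m0 * w0 + m1 * w1) / (w0 + w1), w0 + w1, s0 + s1]
            del blocks[i + 1]
            i = max(i - 1, 0)
        else:
            i += 1
    # Expand pooled blocks back to one fitted rate per dose.
    return np.concatenate([[m] * s for m, _, s in blocks])

# Hypothetical accumulated data over five doses; target toxicity 0.30.
tox = np.array([0, 2, 1, 3, 4])
n = np.array([3, 6, 6, 6, 4])
fit = isotonic_tox(tox, n)
next_dose = int(np.argmin(np.abs(fit - 0.30)))   # dose closest to target
print(fit, next_dose)
```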
Abstract:
Space-time codes from complex orthogonal designs (CODs) with no zero entries offer a low peak-to-average power ratio (PAPR) and avoid the problem of switching off antennas. However, square CODs for 2^a antennas in a+1 complex variables with no zero entries have been discovered only for a <= 3 and for a+1 = 2^k, k >= 4. In this paper, a method of obtaining no-zero-entry (NZE) square designs, called complex partial-orthogonal designs (CPODs), for 2^(a+1) antennas whenever a certain type of NZE code exists for 2^a antennas is presented. Then, starting from an NZE CPOD so constructed for n = 2^(a+1) antennas, a construction procedure is given to obtain NZE CPODs for 2n antennas, successively. Compared to CODs, CPODs have slightly higher ML decoding complexity for rectangular QAM constellations and the same ML decoding complexity for other complex constellations. Using the recently constructed NZE CODs for 8 antennas, our method leads to NZE CPODs for 16 antennas. The class of CPODs does not offer full diversity for all complex constellations; for the NZE CPODs presented in the paper, conditions on the signal sets that guarantee full diversity are identified. Simulation results show that the bit error performance of our codes is the same as that of CODs under an average power constraint and superior to CODs under a peak power constraint.
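The paper's CPOD construction is not reproduced here, but the defining property of a COD, the column orthogonality that yields single-symbol ML decoding, is easy to verify numerically. The sketch below checks it for the simplest NZE square COD, the 2-antenna Alamouti design.

```python
import numpy as np

def alamouti(x1, x2):
    # Rate-1 square complex orthogonal design for 2 antennas with no zero
    # entries: rows are time slots, columns are transmit antennas.
    return np.array([[x1,            x2],
                     [-np.conj(x2),  np.conj(x1)]])

x1, x2 = 1 + 1j, -1 + 2j
C = alamouti(x1, x2)
# Orthogonality: C^H C = (|x1|^2 + |x2|^2) I_2, which decouples the ML
# decision metric into independent single-symbol terms.
print(C.conj().T @ C)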
Abstract:
This thesis is part of the HY-talk research project funded by the University of Helsinki, whose aim is to strengthen the teaching and assessment of speech communication, especially oral proficiency in foreign languages, in general education and at the higher education level. The aim of this thesis is to find out what kinds of repairs speakers of English as a foreign language make in their speech, and to examine the relationship between self-repair and fluency. Repair organization and self-repair have previously been studied in both conversation analysis and psycholinguistics, and although this thesis is closer to earlier conversation-analytic than psycholinguistic research, it draws on both traditions. Self-repair has generally been regarded as a sign of a lack of fluency, particularly in non-native speakers. The purpose of this thesis is to determine how closely self-repair is actually related to fluency or the lack of it. The material consists of speech samples collected for the HY-talk project and of proficiency assessments made on their basis. The speech samples were collected in 2007 in connection with spoken-language testing sessions arranged for the project at three schools in southern Finland. Because the project aims to study and improve the assessment of oral language skills, language professionals involved in the project assessed the speakers' proficiency levels using rating scales compiled for the project on the basis of the proficiency descriptors of the Common European Framework of Reference, and these assessments were stored as part of the project material. The thesis analyses self-repairs using a repair-type classification compiled from earlier psycholinguistic research, together with a classification, created for this thesis, comparing the correctness of repairs. In addition, it compares the self-repairs of two speakers assessed at a higher proficiency level with those of two speakers assessed at a lower level. The results show that many different repair types occur in non-native speech and that the most common repairs are repetitions of the original utterance. Also common are repairs in which the speaker corrects an error, or breaks off and starts an entirely new utterance. The results further indicate that most repairs are probably not due to a lack of fluency. The most common repair types can largely be attributed to an individual's speaking style, to the speaker searching for a particular word or expression, or to the speaker correcting a grammatical, lexical or phonological error noticed in their own speech. The comparison between speakers assessed at the higher and lower proficiency levels shows most clearly that the majority of self-repairs are not connected to the speaker's fluency. The comparison reveals that the mere number of self-repairs does not indicate how fluently a speaker uses the language, since one of the speakers assessed at the higher level repairs their speech almost as often as the speakers assessed at the lower level. Moreover, the results of the correctness classification suggest that speakers at both the higher and the lower levels do not usually repair their speech because they would be unable to express their message correctly and comprehensibly.
Abstract:
A retail precinct along James Street in Brisbane is made more permeable by Richards and Spence.
Abstract:
It is well known that space-time block codes (STBCs) obtained from orthogonal designs (ODs) are single-symbol decodable (SSD) and that those from quasi-orthogonal designs (QODs) are double-symbol decodable (DSD). However, there are SSD codes that are not obtainable from ODs and DSD codes that are not obtainable from QODs. In this paper, a method of constructing g-symbol decodable (g-SD) STBCs using representations of Clifford algebras is presented which, when specialized to g = 1, 2, gives SSD and DSD codes, respectively. For 2^a transmit antennas, the rate (in complex symbols per channel use) of the g-SD codes presented in this paper is (a+1-g)/2^(a-g). The maximum rate of the DSD STBCs from QODs reported in the literature is a/2^(a-1), which is smaller than the rate (a-1)/2^(a-2) of the DSD codes of this paper for 2^a transmit antennas. In particular, the DSD codes reported here for 8 and 16 transmit antennas offer rates 1 and 3/4, respectively, whereas the known STBCs from QODs offer only 3/4 and 1/2, respectively. The construction of this paper is applicable for any number of transmit antennas. The diversity sum and diversity product of the new DSD codes are studied. It is shown that the diversity sum is larger than that of all known QODs, and hence the new codes perform better than comparable QODs at low signal-to-noise ratios (SNRs) for identical spectral efficiency. Simulation results for DSD codes at various spectral efficiencies are provided.
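As a quick consistency check, the two rate formulas above reproduce the 8- and 16-antenna comparison stated in the abstract. A minimal sketch:

```python
from fractions import Fraction

def gsd_rate(a, g):
    # Rate (complex symbols per channel use) of the g-SD codes for 2**a
    # transmit antennas, as stated in the abstract: (a + 1 - g) / 2**(a - g).
    return Fraction(a + 1 - g, 2 ** (a - g))

def qod_rate(a):
    # Maximum reported rate of DSD codes from QODs: a / 2**(a - 1).
    return Fraction(a, 2 ** (a - 1))

for a in (3, 4):  # 8 and 16 transmit antennas
    print(f"2^{a} antennas: DSD (g=2) rate = {gsd_rate(a, 2)}, "
          f"QOD rate = {qod_rate(a)}")
```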