Synthetic Animal: Trends in Animal Breeding and Genetics

Synthetic biology is an interdisciplinary branch of biology and engineering. The subject combines various disciplines from within these domains, such as biotechnology, evolutionary biology, molecular biology, systems biology, biophysics, computer engineering, and genetic engineering. Synthetic biology aims to understand whole biological systems working as a unit, rather than investigating their individual components, and to design new genomes. Significant advances have been made using systems biology and synthetic biology approaches, especially in the field of bacterial and eukaryotic cells. Similarly, progress is being made with 'synthetic approaches' in genetics and animal sciences, providing exciting opportunities to modulate and design genomes and, ultimately, to synthesize animals with favoured traits.


Animal breeding
In 1859, Charles Darwin published his book 'On the Origin of Species', based on the findings that he collected during his voyage on 'the Beagle' [1]. He discovered the forces of natural selection. He also concluded that the individuals that fit best in their environment have the highest chance to survive and reproduce: they are the fittest. His conclusion was that differences in food sources, predators present, etc. between the islands had caused the populations on each island to develop differently over very many generations. Still, Darwin did not know about the basic laws of inheritance. It was the monk Gregor Mendel who published the results of his studies of genetic inheritance in garden peas (https://history.nih.gov/exhibits/nirenberg/HS1_mendel.htm). He showed that genetic material is inherited from both parents, independently of each other, and that each (diploid) individual thus carries 2 copies of the same gene, of which only 1 is passed on to its offspring. Which one is a result of chance (independent assortment). He also showed that these gene copies (alleles) can be dominant (only 1 copy determines the expression of the gene), recessive (2 copies are required for expression), or additive (a copy of each allele results in an expression that is intermediate to that of having 2 copies of either allele). These findings had no immediate impact on animal breeding and were not recognized as important until 1900.
Most of the animal breeding and genetics theory we still use today was developed in the first half of the 20th century. The statistician R. A. Fisher showed that the diversity of expression of a trait could depend on the involvement of a large number of so-called Mendelian factors (genes) [2]. Fisher, together with Sewall Wright and J.B.S. Haldane, founded theoretical population genetics [3,4]. Thomas Hunt Morgan and coworkers connected the chromosome theory of inheritance to the work by Mendel and created a theory in which chromosomes were believed to carry the actual hereditary material [5]. Jay L. Lush is known as the father of modern animal breeding and genetics [6]. He advocated that, instead of subjective appearance, animal breeding should be based on a combination of quantitative statistics and genetic information. The estimated breeding value (EBV) was only developed later by the statistician C. R. Henderson [7]. The estimated breeding value made it possible to rank animals according to their estimated genetic potential (the EBV), which resulted in more accurate selection and thus faster genetic improvement across generations. Henderson further improved the accuracy of the estimated breeding value by deriving the best linear unbiased prediction (BLUP) of the EBV in 1950, although the term has only been used since 1960. He also suggested integrating the full pedigree of the population to include genetic relationships between individuals.
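For reference, the animal model and Henderson's mixed model equations mentioned above can be written compactly; the notation below is the standard textbook form rather than anything specific to this review.

```latex
% Animal model: y = phenotypes, b = fixed effects, u = additive genetic effects (EBVs)
\[
\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{Z}\mathbf{u} + \mathbf{e},
\qquad
\mathbf{u} \sim N(\mathbf{0}, \mathbf{A}\sigma_u^2),
\qquad
\mathbf{e} \sim N(\mathbf{0}, \mathbf{I}\sigma_e^2)
\]
% Henderson's mixed model equations; A is the pedigree-based relationship matrix,
% lambda = sigma_e^2 / sigma_u^2, and solving the system yields the BLUP EBVs u-hat.
\[
\begin{pmatrix}
\mathbf{X}'\mathbf{X} & \mathbf{X}'\mathbf{Z} \\
\mathbf{Z}'\mathbf{X} & \mathbf{Z}'\mathbf{Z} + \lambda\mathbf{A}^{-1}
\end{pmatrix}
\begin{pmatrix} \hat{\mathbf{b}} \\ \hat{\mathbf{u}} \end{pmatrix}
=
\begin{pmatrix} \mathbf{X}'\mathbf{y} \\ \mathbf{Z}'\mathbf{y} \end{pmatrix}
\]
```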

Candidate gene
The candidate gene approach to genetic association studies focuses on associations between genetic variation in genes of interest and disease states or phenotypes. Candidate genes are often selected for study based on a priori knowledge of the gene's biological function and its effect on the trait or disease. Suitable candidate genes are generally selected based on known physiological, biological, or functional relevance to the traits in question. This approach is limited by its reliance on existing knowledge about the known or theoretical biology of the trait. However, more recently developed molecular tools are allowing insight into trait and disease mechanisms and pinpointing potential regions of interest in the genome. Many studies have used candidate genes as part of a multi-disciplinary approach to examining a phenotype or trait [8,9].

Genomic selection
Theo Meuwissen and Mike Goddard developed a way to incorporate the large-scale DNA information that has become available into animal model (BLUP) theory to estimate so-called genomic breeding values [10]. Until 1953, scientists used statistics and presumed mechanisms to make predictions about inheritance; nobody knew exactly what the underlying mechanism was. In 1953, the double helix structure of DNA was discovered. In the beginning, studying DNA was very labour intensive and thus also very costly. Nowadays robots can perform large-scale genotyping, e.g. of more than 60,000 genetic markers on thousands of individuals, within very limited amounts of time. A genetic marker can be considered a kind of 'flag' on the genome.
The main idea behind genomic selection is that the association between the DNA make-up and the performance of animals can add to the estimated breeding value, or even replace it. Animals can be selected at a very early age, because there is no longer a need to wait until the phenotype can be measured on the animals: the associated DNA information is already available. This can also be used for traits that are difficult to measure, such as disease-related traits. It would be highly desirable to only need to infect a finite number of animals, evaluate their response to the infection, link that to their DNA, and use that estimated link to predict the sensitivity of other animals to that disease based on their DNA, without having to infect them. Thus, genomic selection refers to the use of genome-wide genetic markers to predict the breeding value of selection candidates [11]. This method relies on linkage disequilibrium between the markers and the polymorphisms that cause variation in important traits. Consequently, a linear prediction equation can predict the cumulative effect of many causal variants on the breeding value of the animal. Because it is possible to genotype individuals for hundreds of thousands of SNPs at reasonable cost, the markers used in genomic selection are most commonly SNPs [12]. The equation that predicts breeding value from SNP genotypes must be estimated from a sample of animals, known as the reference population, that have been measured for the traits and genotyped for the SNPs. This prediction equation can then be used to predict breeding values for selection candidates based on their genotypes alone. The candidates are ranked on these estimated breeding values, and the best ones are selected to breed the next generation [13].
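To make the prediction-equation idea concrete, the following is a minimal sketch of genomic prediction in its simplest ridge-regression (SNP-BLUP) form; the marker count, variance ratio and simulated data are illustrative assumptions, not values taken from this review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated reference population: animals genotyped for m SNPs (0/1/2 allele counts).
# All sizes and the variance ratio below are illustrative assumptions.
n_ref, n_cand, m = 500, 100, 2000
Z = rng.integers(0, 3, size=(n_ref + n_cand, m)).astype(float)
Z -= Z.mean(axis=0)                                   # centre genotypes

true_effects = rng.normal(0.0, 0.05, size=m)          # many small causal effects
tbv = Z @ true_effects                                # true breeding values
y = tbv[:n_ref] + rng.normal(0.0, 1.0, size=n_ref)    # phenotypes only in the reference set

# SNP-BLUP / ridge regression: solve (Z'Z + lambda*I) g = Z'y for marker effects g.
lam = 100.0                                           # assumed residual-to-marker variance ratio
Zr = Z[:n_ref]
g_hat = np.linalg.solve(Zr.T @ Zr + lam * np.eye(m), Zr.T @ y)

# Genomic EBVs for selection candidates that have genotypes but no phenotypes yet.
gebv = Z[n_ref:] @ g_hat
accuracy = np.corrcoef(gebv, tbv[n_ref:])[0, 1]
print(f"prediction accuracy in candidates: {accuracy:.2f}")

# Rank the candidates and keep the best to breed the next generation.
top = np.argsort(gebv)[::-1][:10]
print("selected candidates:", top)
```

Replacing the simulated genotypes and phenotypes with a real reference population gives the same workflow: estimate marker effects once, then rank genotyped selection candidates without waiting for their phenotypes.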

The advantage and challenge of genomic selection
The advantage of genomic selection over traditional selection is that animals can be selected accurately early in life and for traits that are difficult or expensive to evaluate: disease resistance, fertility, feed conversion and methane emissions are prime examples. In dairy cattle, bulls are traditionally selected after progeny testing, because the genetic merit of a bull for milk production can only be accurately measured through the milk production of his daughters. Progeny testing results in accurate selection, but with a generation interval of 5 years or longer. With genomic selection, the generation interval can be reduced to 2 years, potentially resulting in a 60-120% increase in the rate of genetic gain [14,15]. However, genomic prediction across breeds has been largely unsuccessful to date, with prediction equations derived in one breed giving low accuracies in other breeds [16]. This is likely because of differences in linkage disequilibrium phase between SNPs and causative mutations, so whole-genome sequence data may improve the accuracy of across-breed prediction. The major challenge in applying genomic selection to traits that will be important in the future is assembling large enough reference populations to make accurate predictions, because thousands to tens of thousands of phenotyped individuals are required [17,18].
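The effect of the shorter generation interval can be made explicit with the standard breeder's equation; the accuracies used below are purely illustrative.

```latex
% Rate of genetic gain per year: i = selection intensity, r = accuracy of the EBV,
% sigma_A = additive genetic standard deviation, L = generation interval (years).
\[
\Delta G = \frac{i \, r \, \sigma_A}{L}
\]
```

With i and sigma_A unchanged, moving from progeny testing (say r of about 0.9 and L = 5 years) to genomic selection (say r of about 0.7 and L = 2 years) changes the annual gain by a factor of (0.7/0.9) x (5/2), roughly 1.9, i.e. about a 90% increase, consistent with the 60-120% range cited above.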

Whole-genome sequencing
Although genomic estimated breeding values are now widely used as the foundation for the choice of animals, the current technology has some constraints. It has become clear that much of the accuracy of genomic breeding values (based on 50,000 DNA markers) in fact derives from prediction of the effect of large chromosome segments that segregate within closely related animals [19]. In this situation, the accuracy of the prediction equation will rapidly decay over generations as large chromosome segments break up through recombination. Within breeds, effective population sizes are generally <200 and, consequently, animals within a breed have recent common ancestors and so share large chromosome segments. Using genomic predictions from whole-genome sequence data may overcome some of these problems. Given that the causative mutations are present in the sequence data, the issue of decay in associations between causative mutations and SNPs, which results in the decline in accuracy over time, may be overcome. Although this has been demonstrated in simulated data, in practice achieving it will require a carefully designed reference population [10]. This requires a population in which the linkage disequilibrium between causative mutations and other variants is as limited as possible: if the extent of linkage disequilibrium is too large, the genomic prediction algorithms will distribute the effect of the causative mutation over variants across large chromosome segments, leading to the problem described above. If full genome sequence data could be used in genomic predictions rather than SNP arrays, the accuracy would no longer be bounded by linkage disequilibrium between SNPs and causative mutations, because the causal mutations are in the data set. Though the cost of genome resequencing has declined dramatically, it is still too expensive to resequence the tens of thousands of individuals that would be required to accurately estimate the small effects of the large number of mutations affecting typical complex traits. In silico resequencing of large numbers of animals with specific phenotypes and the accumulation of these data across breeds would then enable highly accurate genomic predictions from whole-genome resequencing data. A breathtaking development is the sequencing of more dairy sires as part of the 1000 Bull Genomes Project, which is now underway [20]. Although using sequence data in genomic predictions is appealing for the reasons described above, an important challenge will be the large number of SNP and other variant effects to be estimated, with a still-limited number of records. The number of variants is likely to be in the tens of millions. One strategy to deal with this will be to use biological information such as "omics" data.

Results of animal breeding and genetics
Selective animal breeding already has about 300 years of history, and a lot has been achieved since. Obvious results have been achieved in the field of cattle breeding. For example, consider the results obtained in cattle breeding: the increase in milk production until 1970 is much less steep than that from 1990 onwards. The reasons for this are many, but important ones are the very strong increase in the use of AI, so that stronger selection of bulls became possible, the introduction of more accurate techniques for estimating breeding values, the introduction of automatic milking and the free stall instead of the tied stall, and better quality nutrition. The increase in phenotypic milk production in the period 1995-2013 is very similar to the estimated increase in genetic potential for milk production: approximately 1500 kg. This indicates that systematic improvements in the environment, such as automatic milking, loose housing, and diet quality, had similar effects on all cows.

Transgenic animal
Transgenic animals embody one of the most exciting research tools in the biological sciences. Transgenic animals represent unique models that are custom tailored to address specific biological questions. Hence, the ability to introduce functional genes into animals provides a very powerful tool for analyzing complex biological systems and processes. Gene transfer is of particular value in those animal species where long life cycles reduce the value of classical breeding practices for rapid genetic modification. In general, a Transgenic Organism (TO) is any organism whose genetic material has been modified using genetic engineering techniques; that is, an organism whose genetic makeup has been modified by the addition of genetic material from an unrelated organism. Transgenesis involves the insertion, deletion or mutation of genes. Inserted genes usually come from a different species, in a form of horizontal gene transfer. In nature this can happen when exogenous DNA penetrates the cell membrane for any reason. It can be done artificially by physically inserting the extra DNA into the nucleus of the intended host with a very fine needle, by attaching the genes to a virus, by firing small particles from a gene gun, or by electroporation [21,22]. Other methods exploit natural forms of gene transfer, such as the ability of lentiviruses to transfer genes to animal cells and the ability of Agrobacterium to transfer genetic material to plants [23,24]. Various developments in genetics have permitted humans to change the DNA and genes of organisms. Jackson et al. (1972) created the first recombinant DNA molecule when they combined DNA from a monkey virus with that of the lambda virus [25]. The first transgenic livestock were produced, and the first animals to synthesise transgenic proteins in their milk were mice engineered to produce human tissue plasminogen activator [26-28]. The first transgenic animal to be approved for food use was AquAdvantage salmon. The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout, enabling them to grow year-round instead of only during spring and summer [29]. TOs are used in the production of drugs, in agriculture and in experimental medicine, with developing uses in conservation [30]. The first transgenic animal was created by injecting DNA into mouse embryos and then implanting the embryos in female mice [31]. Transgenic animals currently being developed can be placed into broad classes based on the intended goal of the transgenesis, including research into human diseases, the production of products intended for human therapeutic use, the production of industrial or consumer products, the enhancement of production or food quality traits, the enrichment or enhancement of the animals' interactions with humans, and the improvement of animal health. Dolly was a sheep and the first animal to be cloned from an adult somatic cell. Genetically modified animals are used as experimental models in biomedical research and for phenotypic testing [32]. Transgenic animals are becoming more vital to the discovery and development of treatments for many diseases. By changing the DNA of an animal or transferring DNA to it, we can create proteins that may be used in medical treatments. Stable expression of human proteins has been achieved in many animals, including pigs, sheep and rats.
For example, human alpha-1-antitrypsin, which has been tested in sheep, is used in treating humans with this deficiency, and transgenic pigs with human histocompatibility genes have been studied in the hope that their organs will be suitable for transplant with less chance of rejection [33]. Scientists announced that they had successfully transferred a gene into a primate species and created a breeding line of genetically modified primates for the first time [34]. Chinese scientists created dairy cows with human genes to produce milk that would be similar to human breast milk [35]. Researchers from New Zealand also developed a transgenic cow that produced allergy-free milk [36].

DNA microinjection:
The favourable gene is injected into the pronucleus of a fertilized egg using a fine glass needle. The modified cell is cultured in vitro to develop to a specific embryonic stage and is then transferred to a recipient female. DNA microinjection does not have a high efficiency; even if the new DNA is incorporated into the genome, the new traits will not appear in the offspring unless the DNA enters the germ line [37].

Retrovirus-mediated gene transfer:
A retrovirus is a virus that carries its genetic material in the form of RNA instead of DNA. Retroviruses are used as vectors to transfer genetic material into the host cell. The result is a chimera, an organism consisting of parts or tissues of diverse genetic constitution [37].

Restriction enzyme mediated integration:
Restriction enzyme mediated integration (REMI) is a technique for integrating DNA into genomic sites that have been cut by the same restriction enzyme used to linearise the DNA. Plasmid integration occurs at the corresponding sites in the genome, often regenerating the recognition sites of the restriction enzyme used for plasmid linearisation [37].

Stem cell transgenesis
Multipotent: Multipotent stem cells can only differentiate into a finite number of therapeutically beneficial cell types; however, their safety and relative lack of complications have meant that the vast majority of personalized cellular therapeutics involve multipotent stem cells [38].
Pluripotent: Transgenic vectors can be delivered randomly or targeted to a particular genomic location, such as a safe harbor. Research and technology development have provided the tools necessary to allow effective and safe pluripotent stem cell (PSC) transgenesis [39,41].

Totipotent:
The administered gene is inserted into totipotent stem cells, cells which can develop into any specialized cell type. Cells containing the desired DNA are incorporated into the host's embryo, resulting in a chimeric animal. Unlike the other two methods, which require live transgenic offspring for testing, embryonic cell transfer can be examined at the cell stage [42,43].

Genome editing
Genome editing with engineered nucleases (GEEN) is a kind of genetic engineering in which DNA is replaced, inserted or deleted in the genome of an organism using engineered nucleases. These nucleases create site-specific double-strand breaks (DSBs) at desired locations in the genome. The induced double-strand breaks are repaired through homologous recombination (HR) or non-homologous end-joining (NHEJ), resulting in targeted mutations. Currently, four families of engineered nucleases are being used: zinc finger nucleases (ZFNs), meganucleases, the CRISPR-Cas system and transcription activator-like effector-based nucleases (TALENs) [44]. Among the most important requirements of reverse genetic analysis is the ability to manipulate the DNA sequence of the target organism. This can be achieved by:
• Recombination-based methods that use the natural ability of cells to swap DNA between an exogenous DNA molecule and the cell's own genetic information.
• Site-directed mutagenesis employing either polymerase chain reaction (PCR) or phage-mediated methods and oligonucleotides containing the desired mutation (see the sketch after this list) [45].
Drawbacks of these approaches:
• Phage- and PCR-mediated approaches are less successful in more complicated organisms such as mammals, where delivery becomes more difficult.
• They also require stringent selection steps and thus the addition of selection-specific sequences along with those incorporated into the DNA [46].
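As a purely hypothetical illustration of the oligonucleotide-based mutagenesis mentioned above, the sketch below builds a mutagenic primer that carries a desired substitution flanked by arms matching the unchanged template; the sequence, function name and arm length are invented for the example.

```python
# Hypothetical sketch of building a mutagenic oligo for PCR-based site-directed
# mutagenesis: the desired substitution sits in the middle of the primer, flanked
# by arms that anneal to the unchanged template.
def mutagenic_primer(template: str, position: int, new_base: str, arm: int = 15) -> str:
    """Return a primer carrying new_base at 'position' (0-based) of 'template'."""
    if not (arm <= position < len(template) - arm):
        raise ValueError("position too close to the template ends for the chosen arm length")
    return template[position - arm:position] + new_base + template[position + 1:position + arm + 1]

# Toy example: introduce an A->G substitution at position 20 of a made-up template.
template = "ATGGCTAGCTTAGGCTAACCATTGGCCTTAAGGCTAGCTA"
print(mutagenic_primer(template, 20, "G"))
```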

Double stranded breaks
Fundamental to the use of nucleases in genome editing is an understanding of DNA double-stranded break repair mechanisms. The known DNA double-stranded break repair pathways that are functional in all organisms are homology-directed repair (HDR) and non-homologous end joining (NHEJ).

Site-specifi c double stranded breaks
Creating a double-stranded break in DNA is not in itself a challenging task, as commonly used restriction enzymes are capable of doing so. However, if genomic DNA is treated with a specific restriction endonuclease, many double-stranded breaks will be made. This is because most restriction enzymes recognize only a few base pairs as their target, and a specific base-pair composition of that length will very likely be found at many locations across the genome. To overcome this challenge and make site-specific DNA double-stranded breaks, three different classes of nucleases have been developed: transcription activator-like effector nucleases (TALENs), zinc finger nucleases (ZFNs) and meganucleases. Below is a brief overview of these enzymes.
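A quick back-of-the-envelope sketch of why short restriction sites cut in many places while a long, meganuclease-style recognition sequence is effectively unique; the 1 Mb random genome and the two sites are illustrative assumptions.

```python
import random

random.seed(1)
genome = "".join(random.choice("ACGT") for _ in range(1_000_000))  # 1 Mb toy genome

def count_site(seq: str, site: str) -> int:
    """Count (possibly overlapping) occurrences of a recognition site."""
    count, start = 0, 0
    while (hit := seq.find(site, start)) != -1:
        count += 1
        start = hit + 1
    return count

six_bp_site = "GAATTC"                       # EcoRI-like 6 bp site: expect ~1 Mb / 4^6 = ~244 hits
eighteen_bp_site = genome[500_000:500_018]   # an 18 bp "meganuclease-like" site taken from the genome

print("6 bp site hits:", count_site(genome, six_bp_site))
print("18 bp site hits:", count_site(genome, eighteen_bp_site))  # almost certainly just 1
```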
Meganucleases have the unique feature of long recognition sequences, which makes them naturally very specific [47]. This can be exploited to make site-specific DNA double-stranded breaks in genome editing; however, the challenge is that the known meganucleases are insufficient to cover all possible target sequences. To overcome this challenge, mutagenesis and high-throughput screening methods have been used to create meganuclease variants that recognize unique sequences, and others have been able to fuse various meganucleases and create hybrid enzymes that recognize a new sequence [48,49].
Meganucleases have the benefit of causing less toxicity in cells than methods such as zinc finger nucleases, likely because of more stringent DNA sequence recognition; however, the construction of sequence-specific enzymes for all possible sequences is time consuming and costly, as it does not profit from the combinatorial possibilities that methods such as zinc finger nucleases and TALEN-based fusions utilize [47]. In contrast to meganucleases, the concept behind zinc finger and transcription activator-like effector nuclease technology is based on a non-specific DNA-cutting enzyme, which can then be linked to DNA sequence-recognizing peptides such as transcription activator-like effectors (TALEs) and zinc fingers. The key requirement was to find an endonuclease whose DNA recognition site and cleavage site are separate from each other, a property that is not common among restriction enzymes [50]. A restriction enzyme with such properties is FokI. Additionally, FokI has the advantage of requiring dimerization for nuclease activity, which means the specificity increases dramatically, as each nuclease partner recognizes a unique DNA sequence. To increase this effect, FokI nucleases have been engineered that can only function as heterodimers and have increased catalytic activity [51]. Though the nuclease portions of both zinc finger and TALE nuclease constructs have the same properties, the difference between these engineered nucleases lies in their DNA recognition peptide: zinc finger nucleases rely on Cys2-His2 zinc fingers, and TALEN constructs rely on TALEs. Both of these DNA-recognizing peptide domains are found naturally in combinations in their proteins. Cys2-His2 zinc fingers typically occur in tandem repeats, each recognizing about 3 bp, and are found in diverse combinations in a variety of nucleic acid-interacting proteins such as transcription factors [47]. One recent improvement integrates the DNA-binding specificity of transcription activator-like effectors with the nuclease specificity of meganucleases; these "megaTALs" are compatible with all current technologies and may represent improvements on existing methods [52].

Systems biology
Systems biology is the mathematical and computational modeling of complex biological systems. An emerging engineering approach applied to biological research, systems biology is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach to biological research. Particularly from the year 2000 onwards, the concept has been used widely in the biosciences in a variety of fields. One of the aims of systems biology is to model and discover emergent properties of organisms, tissues and cells functioning as a system, whose theoretical description is only possible using techniques that fall within the scope of systems biology. These typically involve cell signaling or metabolic networks [53,54]. Different aspects of systems biology:
• As a field of study of the interactions between the components of biological systems, and of how these interactions give rise to the behavior and function of that system [55].
• As a series of operational protocols used for performing research, namely a cycle composed of theory, analytic or computational modelling to propose specific testable hypotheses about a biological system, experimental validation, and then the use of the newly acquired quantitative description of cells or cell processes to refine the computational model [56]. Since the purpose is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Thus, transcriptomics, proteomics, metabolomics and other high-throughput techniques are used to gather quantitative data for the construction and validation of models [57].
• As the application of dynamical systems theory to molecular biology. Indeed, the focus on the dynamics of the studied systems is the principal conceptual difference between bioinformatics and systems biology. Ludwig von Bertalanffy, with his general systems theory, can be seen as one of the pioneers of systems biology [58]. One of the first numerical simulations in cell biology was published by the neurophysiologists Alan Lloyd Hodgkin and Andrew Fielding Huxley, who constructed a mathematical model that explained the action potential propagating along the axon of a neuronal cell [59]. Denis Noble developed the first computer model of the heart pacemaker in 1960 [60].
The formal study of systems biology, as a distinct discipline, was launched by the systems theorist Mihajlo Mesarovic with "Systems Theory and Biology" [61]. The successes of molecular biology throughout the 1980s, coupled with scepticism toward theoretical biology, which at that time promised more than it achieved, caused the quantitative modelling of biological processes to become a somewhat minor field [62]. However, the birth of functional genomics in the 1990s meant that large quantities of high-quality data became available, making more realistic models possible. Several articles on systems genetics, systems medicine and systems biological engineering were published [63-65]. The group of Masaru Tomita published the first quantitative model of the metabolism of a whole cell [66]. Systems biology emerged as a movement in its own right after Institutes of Systems Biology were established in Seattle and Tokyo, spurred on by the completion of various genome projects, the large increase in data from the omics and the accompanying advances in bioinformatics and high-throughput experiments. In 2002 and 2003, several foundations and institutions put forward a grand challenge for systems biology: to construct a mathematical model of the whole cell. Since 2006, because of a shortage of people trained in systems biology, several doctoral programs in systems biology have been established in many parts of the world. The first whole-cell model, of Mycoplasma genitalium, was achieved in 2012; this whole-cell model is able to predict the viability of Mycoplasma genitalium cells in response to mutations [67]. Following the description of systems biology as the ability to obtain, integrate and analyze complex data sets from multiple experimental sources using interdisciplinary tools and databases (Tables 1, 2), some typical technology platforms are: genomics, transcriptomics, epigenomics or epigenetics, translatomics or proteomics, metabolomics, phenomics, interferomics, glycomics, lipidomics, interactomics, neuroelectrodynamics, fluxomics, biomics, semiomics and cancer systems biology. The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. Because of the large number of parameters, constraints and variables in cellular networks, computational and numerical techniques are often used, for example flux balance analysis (FBA) [68].
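As a toy illustration of flux balance analysis, the sketch below maximizes a "biomass" flux in a three-reaction network at steady state; the stoichiometry, bounds and objective are invented for the example and solved with SciPy's linear programming routine.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA): one internal metabolite A, three reactions.
#   v1: uptake        -> A
#   v2: A -> biomass   (objective)
#   v3: A -> byproduct
# Steady state requires S @ v = 0, with S the stoichiometric matrix for A.
S = np.array([[1.0, -1.0, -1.0]])        # rows: metabolites, columns: reactions
b = np.zeros(S.shape[0])

c = np.array([0.0, -1.0, 0.0])            # linprog minimizes, so maximize v2 via -v2
bounds = [(0.0, 10.0),                     # uptake capped at 10 (illustrative)
          (0.0, None),
          (0.0, None)]

res = linprog(c, A_eq=S, b_eq=b, bounds=bounds, method="highs")
print("optimal fluxes [v1, v2, v3]:", res.x)   # expected: [10, 10, 0]
```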

Synthetic biology
Synthetic biology is an interdisciplinary field of engineering and biology. The subject incorporates various disciplines from within these domains, such as evolutionary biology, biotechnology, molecular biology, biophysics, computer engineering, genetic engineering and systems biology. Jan Staman described it as "a new emerging scientific field where ICT, biotechnology and nanotechnology meet and strengthen each other" (http://www.synbiosafe.eu/uploads). Synthetic biology can be described as designing and constructing biological modules, biological systems, and biological machines for useful purposes [69]. Progress is being made with synthetic approaches in genetics and animal sciences, providing exciting opportunities to modulate and design genomes and, ultimately, to synthesize animals with favoured traits. Thus, in this paper we explain and illustrate applications of synthetic biology, especially to animal breeding and genetics.

Table 2 (excerpt). Tools, databases and methods for systems biology.
SBML — a software-independent language for describing models common to research in many areas of computational biology, including cell signaling pathways, metabolic pathways, gene regulation, and others [70,71].

Studies in synthetic biology can be subdivided into broad categories according to the approach they take to the problem at hand: biomolecular engineering, standardization of biological parts, genome engineering and genome design [72].
Because of the complexity of natural biological systems, it would be simpler to rebuild the systems of interest from the ground up, in order to provide engineered surrogates that are easier to understand, control and manipulate [73]. Tables 1 and 2 list tools, databases and methods for synthetic biology.

The essential gene
Essential genes are those genes of an organism that are thought to be critical for its survival. However, being essential depends on the conditions in which an organism lives. For example, a gene required to digest starch is only essential if starch is the only source of energy. More recently, systematic attempts have been made to detect and identify those genes that are absolutely required to maintain life, provided that all nutrients are available [74]. These experiments have led to the conclusion that the absolutely required number of genes for bacteria is on the order of 250-300. These essential genes encode proteins that replicate DNA, maintain central metabolism, translate genes into proteins, maintain basic cellular structure, and mediate transport processes into and out of the cell. Most genes are not essential but convey selective advantages and increased fitness.
Determining the sets of genes necessary for the survival of diverse organisms has helped to identify the fundamental processes that sustain life across an array of environments [75]. This work has also served as the starting point for efforts by synthetic biologists to design organisms [76]. Despite the importance of essential gene sets, they have traditionally been challenging to assemble because of the difficulty of observing mutations that result in lethal phenotypes. Recently, the pairing of transposon mutagenesis with next-generation sequencing (NGS), referred to collectively as transposon sequencing (Tn-seq), has resulted in a dramatic advance in the detection of essential gene sets [77,78]. The important characteristic of Tn-seq is the use of high-throughput sequencing to screen the fitness of every transposon mutant in a pooled population, measuring each mutation's effect on survival. This information can be used to quantitatively ascertain the impact of loss-of-function mutations at any given locus, intergenic or intragenic, under the conditions in which the library is grown [79]. Essential gene sets for 42 diverse organisms distributed across all three domains have now been defined [80]. A recently developed variation on Tn-seq, random barcode transposon site sequencing (RB-TnSeq), further minimizes the library preparation and sequencing costs of whole-genome mutant screens [81].
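A schematic of how Tn-seq insertion counts might be turned into a crude essentiality call is sketched below; the gene coordinates, simulated insertion sites and density threshold are all made up for illustration and do not come from the cited studies.

```python
from collections import Counter
import random

random.seed(42)
# Hypothetical gene coordinates (start, end) on a 50 kb toy genome.
genes = {"geneA": (1_000, 2_500), "geneB": (5_000, 6_200), "geneC": (10_000, 12_000)}

# Simulated transposon insertion sites from pooled-library sequencing;
# pretend geneB tolerates no insertions (a candidate essential gene).
insertions = [random.randint(0, 50_000) for _ in range(5_000)]
insertions = [pos for pos in insertions if not (5_000 <= pos <= 6_200)]

hits = Counter()
for pos in insertions:
    for gene, (start, end) in genes.items():
        if start <= pos <= end:
            hits[gene] += 1

for gene, (start, end) in genes.items():
    density = hits[gene] / (end - start)          # insertions per bp
    call = "candidate essential" if density < 0.01 else "dispensable"
    print(f"{gene}: {hits[gene]} insertions, density {density:.3f} -> {call}")
```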
Despite the proliferation of genome-wide essentiality screens, a complete essential gene set has yet to be defined for a synthetic organism. In algae, efforts are underway to produce a Tn-seq-like system in Chlamydomonas reinhardtii; however, the mutant library currently lacks sufficient saturation to determine gene essentiality [82]. The absence of experimentally determined essential gene sets in such organisms, despite their importance to the environment and industrial production, is largely because of the difficulty and time required for their genetic modification. As a result, it has been developed as a model organism and a production platform for a number of fuel products and high-value chemicals [83].

DNA synthesis
It has been reported that several groups offer the synthesis of genetic sequences up to 2,000 bp long with a turnaround time of less than 2 weeks. Nucleotides harvested from an inkjet-manufactured DNA chip, combined with DNA mismatch error-correction, permit cheap large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino acids [84]. In addition, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was hailed as "the most important innovation in the synthetic biology space in nearly 30 years." While other methods take years to edit sequences, CRISPR speeds that time up to weeks [84].
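As a small illustration of CRISPR/Cas-based targeting, the sketch below scans one strand of a toy sequence for SpCas9-style target sites, i.e. a 20-nt protospacer immediately followed by an NGG PAM; real guide design would additionally consider the reverse strand, off-targets and GC content, and the example sequence is invented.

```python
import re

def find_spcas9_sites(seq: str):
    """Yield (start, protospacer, PAM) for 20-nt targets followed by an NGG PAM (forward strand only)."""
    seq = seq.upper()
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        yield m.start(), m.group(1), m.group(2)

toy = "TTGACATGGCTAGCTAGCTAGGCTAGCTAGCTAGCTAAGGTTTT"
for start, protospacer, pam in find_spcas9_sites(toy):
    print(start, protospacer, pam)
```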

DNA sequencing
Synthetic biologists make use of DNA sequencing in their work in several ways. Firstly, large-scale genome sequencing efforts continue to provide a wealth of information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can create parts and devices. Secondly, synthetic biologists apply sequencing to verify that they have fabricated their engineered system as intended. Thirdly, fast, inexpensive and reliable sequencing can also facilitate rapid detection and identification of synthetic systems and organisms [85].

Modeling
Models inform the design of biological systems by allowing synthetic biologists to better predict system behavior prior to fabrication. Synthetic biology will benefit from better models of how DNA encodes the information needed to specify the cell, how biological molecules bind substrates and catalyze reactions, and how multi-component integrated systems behave. Recently, multiscale models of gene regulatory networks have been developed that focus on synthetic biology applications. Simulations have been used that model all biomolecular interactions in transcription, translation, regulation, and induction of gene regulatory networks, guiding the design of synthetic systems [86-88].
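A minimal deterministic model of the transcription-translation process mentioned above, integrated with forward Euler, is sketched below; all rate constants are illustrative rather than taken from the cited models.

```python
import numpy as np

# dm/dt = k_tx - d_m * m         (transcription and mRNA decay)
# dp/dt = k_tl * m - d_p * p     (translation and protein decay)
k_tx, d_m, k_tl, d_p = 2.0, 0.2, 5.0, 0.05   # illustrative rate constants
dt, steps = 0.01, 20_000

m, p = 0.0, 0.0
trace = []
for i in range(steps):
    m += dt * (k_tx - d_m * m)
    p += dt * (k_tl * m - d_p * p)
    if i % 2_000 == 0:
        trace.append((round(i * dt, 1), round(m, 2), round(p, 1)))

print("time, mRNA, protein:")
for row in trace:
    print(row)
# Analytic steady state for comparison: m* = k_tx/d_m, p* = k_tl*m*/d_p
print("steady state (analytic):", k_tx / d_m, k_tx * k_tl / (d_m * d_p))
```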

Synthetic DNA
With dramatic decreases in the cost of synthesizing nucleotides, the sizes of DNA constructs made from oligos have increased to the genomic level [89]. For instance, researchers reported synthesis of the 9.6 kilobase pair Hepatitis C virus genome from chemically synthesized 60- to 80-mers [90]. The 5,386 base pair genome of the bacteriophage phiX174 was assembled in about 2 weeks [91]. The same group later made a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and worked on getting it functioning in a living cell [92].

Synthetic gene networks
Through synthetic biological ON-OFF switches, a set of genes can be chosen and merged to interact in a controllable and predictable manner, forming a system with a preset function, also known as a synthetic gene circuit. To construct higher-order gene networks for advanced therapeutic applications, a toolbox of well-controllable, standardized and well-characterized parts should be available. In some cases, these simple gene networks are also used as therapies. For performing logical operations in cells, programmable Boolean logic gates were created in 2004 by incorporating heterogeneous transcription factors [93]. A gene network implementing a Boolean AND gate was applied for targeting cells, where the AND gate activity was obtained only when both pre-set conditions were met, leading to the expression of apoptotic genes and cell death [94]. Boolean logic gates have also been engineered based on synthetic transcription factor (TF)-containing zinc finger motifs (ZF) and clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 motifs [95]. These are attractive components for engineering higher-order networks because (i) CRISPR/Cas9 and zinc fingers can be made to recognize virtually any DNA sequence, and (ii) they can function without interfering with each other. Indeed, the bacterial CRISPR/Cas system has been shown to be easy and versatile to use. Qi et al. (2013) showed that an endonuclease-deficient Cas9 can be used as a programmable 'CRISPRi' tool for gene silencing in E. coli [96]. Inhibitory circuits in mammalian cells have been introduced using dCas9 systems [41,97]. Recently, CRISPR regulatory devices were layered to obtain cascaded circuits, and the expression of functional guide RNAs (gRNAs) from RNA polymerase II promoters and the multiplexed production of gRNAs and proteins from a single transcript in human cells were made possible [98,99]. In the switches discussed so far, the switching molecule must be present in order to maintain the switch in either the OFF or ON state. To reversibly set the switch to the OFF or ON position by applying a trigger molecule, toggle switches have been developed [100]. Examples of how these toggle switches have been employed include sensing the presence of hormones or signalling molecules and monitoring the environment of immune cells in lymph nodes. Though more complicated in network topology, synthetic mammalian oscillators constitute synthetic biological parts that can be incorporated into higher-order circuits or used to govern metabolic and signalling pathways and repair in mammalian cells. Such a synthetic oscillator has been developed using a time-delayed negative feedback loop, but these systems have been shown to dampen their oscillations because of noise and/or epigenetic silencing [101-105]. The addition of a positive feedback loop may overcome these limitations and generate autonomous and tunable oscillatory expression of reporter genes [105]. A low-frequency mammalian oscillator has also been developed by silencing the tetracycline-controlled transactivator using siRNA encoded in the introns of its mRNA, facilitating robust and autonomous expression of a fluorescent reporter protein with a period of 26 h [106]. In order to generate transcriptional and translational time delays for tuning oscillators, inteins could also be employed [107]. All of the synthetic biological control circuits described in this section contribute to the development of mammalian cell biocomputers [108].
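To illustrate the Boolean AND-gate behaviour described above, the following sketch models output expression as the product of two Hill-type activation terms, so the output is high only when both inducers are present; the Hill coefficients, thresholds and maximal rate are illustrative assumptions, not parameters from the cited circuits.

```python
def hill_activation(x: float, k: float, n: float = 2.0) -> float:
    """Fractional activation of a promoter by inducer concentration x (Hill function)."""
    return x**n / (k**n + x**n)

def and_gate_output(inducer_a: float, inducer_b: float,
                    k_a: float = 1.0, k_b: float = 1.0, v_max: float = 100.0) -> float:
    """Output expression is the product of the two activations: high only if both inputs are high."""
    return v_max * hill_activation(inducer_a, k_a) * hill_activation(inducer_b, k_b)

# Truth-table-like behaviour of the gate (0 = inducer absent, 10 = saturating inducer).
for a in (0.0, 10.0):
    for b in (0.0, 10.0):
        print(f"A={a:4.1f}  B={b:4.1f}  output={and_gate_output(a, b):6.1f}")
```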

Synthetic genomics
Synthetic genomics is a nascent field of synthetic biology that applies genetic modification to pre-existing life forms with the goal of producing some product or desired behaviour on the part of the life form so created.
Researchers were able to build a synthetic organism for the first time. This was accomplished by synthesizing a 600 kilobase pair genome (resembling that of M. genitalium) via transformation-associated recombination and the Gibson assembly method [109].

Synthetic life
One important topic in synthetic biology is synthetic life: artificial life created in vitro from biomolecules and their component materials. Synthetic life experiments attempt to study some of the properties of life, to probe the origins of life or, more ambitiously, to rebuild life from non-living components. Synthetic biology tries to create new biological molecules and even novel living species. In the area of synthetic biology, a living "artificial cell" has been defined as a completely synthetically made cell that can maintain ion gradients, capture energy, contain macromolecules and have the ability to mutate [110]. The first living organism with 'artificial' DNA was produced when Escherichia coli was engineered to replicate an expanded genetic alphabet [111]. A completely synthetic genome has also been produced and introduced into genomically emptied bacterial host cells [112].

The ethics and public acceptance issues
A variety of potential harms are being recognized with synthetic biology and related subjects. One way to carve up these potential harms is to distinguish between what we call "physical harms" and "non-physical harms." These potential harms are not unique to synthetic biology or synthetic life; they are the same concerns that have been raised in the context of other emerging technologies such as neuroscience, genetics, nanotechnology and stem cell research. In the literature, we observed fairly consistent agreement about what the potential physical harms of synthetic biology might be, though there is disagreement about how likely those harms are to materialize and about what action, if any and at what cost, should be taken to prevent them. Enthusiasts tend to adopt a pro-actionary approach to the hazard of physical harm, arguing that we should not seek to interfere with the development of an emerging technology unless we have very good reason to suspect that it will cause serious physical harm. Alongside self-regulation, some enthusiasts defend the use of public funds for the kind of public engagement that seeks primarily to educate the public about risks and benefits, so that members of the public can become informed consumers of emerging technologies. Critics (those who are concerned about advances) tend to adopt a pre-cautionary view, arguing that we should be prepared to interfere with the development of an emerging technology if we have good reason to suspect that it might cause serious physical harm, and they generally see such a risk in synthetic biology.
Critics argue for oversight, regulation and the kind of public engagement that shapes the development of emerging technologies, such as is practiced in some countries around genetically modified foods and other emerging technologies and is being employed and studied around nanotechnology [113-115]. Many people fall somewhere on the spectrum between critics and enthusiasts, finding themselves torn between the insights of each side. On the question of non-physical harms, we observed some agreement among enthusiasts and critics that some non-physical harms are worth discussing. While there is surely more work to do in identifying, conceptualizing and addressing these non-physical harms, there is already some acceptance, for example, of the legitimacy of the concern that patents might slow down research, and of voluntary open-source practices as one way to address this concern. However, there are non-physical harms that have thus far received short shrift in discussions of synthetic biology. This group of non-physical harms centres on concerns about the appropriate relationship between nature and humans and about whether humans ought to create new kinds of life. We suggest that those who lead and fund synthetic biology research critically evaluate, and carefully describe, concerns about both physical and non-physical harms.
In so doing, they should draw on our experience of these concerns in the context of other emerging technologies, including neuroscience, genetics and nanotechnology. It will also be important, when examining concerns about physical and non-physical harms, to seek to carefully describe, and critically evaluate, the various understandings of these concerns and the suggested responses to them that are formulated within both the pre-cautionary and pro-actionary frameworks. We need to better understand what individuals in our society mean when they cite a concern that some synthetic biology or synthetic life is against nature (or is playing God). For those who believe that the job of human beings is to shape themselves and the rest of the natural world, synthetic biology is a clear next step, and concerns about "playing God" are incoherent. While powerful, that understanding of our place in the world is but one very specific understanding. For those who believe that the job of human beings is to accept and "let be" some features of themselves and the rest of the natural world, those questions are worth taking seriously. By better understanding exactly what values are considered to be at play in the context of synthetic biology, we will be in a better position to understand what action would be reasonable to recommend or expect. As with other harms, we should draw on our experience of these concerns in the context of other emerging technologies, including neuroscience, genetics and nanotechnology. Understanding and respect can affect the choice of experiments and eventual products, the communication of results and the direction of publicly funded programs. They can also make those who might initially have opposed synthetic biology more receptive.

Conclusions
An abstract of trends in animal breeding and genetics and some related subjects is shown in Figure 1. New and conventional genetic architectures can be defined using systems biology information, opening opportunities for novel applications in animal breeding and genetics. Biomarkers of physiological states can be used to breed the best animals. So far, omics has been applied to genetic questions in only a few species. Practical issues in collecting samples and implementing suitable experimental designs should also be considered, given the sensitivity of omics profiles to environmental conditions. However, advancements in this field and in synthetic biology are expected, moving the bottleneck to the interpretation and use of omics information in animal breeding and genetics, for which new methodological developments will contribute to better defined approaches in the omics era. In addition, synthetic biology is an emerging interdisciplinary research field combining biology, computational science and mathematics, which aims at creating models for the dynamic interactions of system components. Animal sciences have arrived at the threshold of a genomics data explosion and are now in a position to make the most effective use of the improved knowledge of the structure, variation, expression and synthesis of animal genomes. The application of synthetic biology approaches using this omics information will provide better insight into the biology. Consequently, it will provide opportunities to monitor, modulate and improve animals. Synthetic biology approaches require close collaboration between many different disciplinary scientific communities that share resources, knowledge and technologies, and that are willing to integrate their data sets. With the development of synthetic biology approaches, we are entering the era of a predictive theoretical biology for farm animals as well as genetic manipulation.

Acknowledgement
We would like to acknowledge F. Marandi for his helpful feedback on the manuscript.