Bipolar Junction Transistors

BJT (Bipolar Junction Transistors) - Introduction, Types & Structure


You have already studied a two-terminal electronic device, the diode. Now it is time to study a three-terminal electronic device: the transistor. There are many different types of transistors, but we will stick to the most basic type. In this post, I am going to introduce the Bipolar Junction Transistor.

Outline:

  • Introduction to BJT
    • Types
    • Circuit symbol
    • Current directions
    • Cross-Sectional View/ Structure

Introduction To BJT:

The term bipolar refers to the fact that the current through the transistor is made up of both majority and minority carriers, that is, electrons and holes.
A BJT is a three-terminal, three-layer semiconductor device. The three layers are connected back to back: the left layer is called the emitter, the middle layer the base, and the right layer the collector.

Types Of BJT:

Figure 1: Types of BJT and their circuit symbols

First of all, I would like to explain the simplified structure of a BJT. Figure 1 shows the simplified structure of the device together with the circuit symbols.

Look at figure 1(a): the transistor consists of three semiconductor regions, the emitter (n-type), the base (p-type) and the collector (n-type). This is an NPN transistor: two layers of n-type semiconductor separated by a p-type layer. So, two pn junctions exist in a single transistor. The junction between the emitter and the base is called the emitter-base junction (or emitter diode), and the junction between the base and the collector is called the collector-base junction (or collector diode).

Similarly, look at figure 1(b): this is a PNP transistor. It is the dual of an NPN transistor. It consists of three semiconductor regions: the emitter (p-type), the base (n-type) and the collector (p-type).
A PNP transistor consists of two layers of p-type semiconductor separated by an n-type layer, so again two pn junctions exist in a single transistor: the emitter-base junction and the collector-base junction.

In both types of transistors, the emitter region is heavily doped. The base region is thin and lightly doped as compared to the emitter and collector. The collector region is moderately doped.

Circuit Symbol & Current Directions:

The direction of the arrows shows the direction of the current. As a beginner, the current directions can be a little confusing. Have a look at the emitter: there is an arrow on it. This is because a practical BJT is not symmetrical. The arrow shown on the emitter terminal indicates the conventional current direction. When we apply the KVL equation to a transistor circuit, we make use of conventional current directions.
I would like to write some basic equations. These will help you in learning BJT theory.

Voltage equations
VCB = VC - VB
VBE = VB - VE
VCE = VC - VE

Current equations
IC = β*IB
IE = IC + IB
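The voltage and current equations above can be sketched in code. This is a minimal illustration, not from the original post; the function name and example values (IB = 10 µA, β = 100) are my own assumptions.

```python
def bjt_currents(i_b, beta):
    """Return (IC, IE) for a given base current IB and current gain beta.

    Implements the two current equations from the text:
    IC = beta * IB and IE = IC + IB.
    """
    i_c = beta * i_b   # IC = beta * IB
    i_e = i_c + i_b    # IE = IC + IB
    return i_c, i_e

# Example values (assumed): IB = 10 microamps, beta = 100
i_c, i_e = bjt_currents(10e-6, 100)
print(i_c)  # about 1 mA
print(i_e)  # about 1.01 mA
```

Note that because IE = IC + IB, the emitter current is always the largest of the three terminal currents.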

Cross-Sectional View:

Figure 2: Simplified cross-sectional view

All three semiconductor regions are doped differently. The base is always in the middle; it is lightly doped and highly resistive. From the cross-sectional view, it is clear that the collector-base junction has a much larger area than the emitter-base junction. The emitter is heavily doped because it should be capable of injecting/emitting electrons (for an NPN transistor) or holes (for a PNP transistor) into the base. The lightly doped base provides isolation between collector and emitter. The collector has a large surface area and is moderately doped. When the emitter-base junction is forward biased and the collector-base junction is reverse biased, the collector collects nearly all the electrons emitted by the emitter.

The base terminal is also used to adjust the base-emitter voltage. Any change in base-emitter voltage will change the current between the emitter and collector significantly.


Diffusion & The Barrier Potential | The Unbiased Transistor:

Figure 3

Look at figure 3(a). It shows a transistor before diffusion. As discussed above, there are two back-to-back diodes: the emitter diode and the collector diode.


Of course, there are negatively charged electrons in the emitter region trying to recombine with positively charged holes in the base region.


Similarly, there are negatively charged electrons in the collector region as well. And these negatively charged electrons are also trying to recombine with positively charged holes in the base region.


Of course, two depletion regions are formed, just like in diodes. For each of these depletion regions, the barrier potential is about 0.7 V (the standard value for silicon devices). Figure 3(b) shows these two depletion regions.

Effect Of Biasing On Barrier Potential On BJT | Modes Of Operations:

When an external voltage is applied to the transistor, then it is called a biased transistor. There are many different biasing techniques available. You will learn more about biasing in later posts.


Why do we need biasing? With the help of biasing we establish the desired voltage and current conditions for the transistor (also termed the Q point). Whenever we design a circuit, biasing is necessary for the correct operation of the transistor. A beginner needs to learn how to bias a transistor, for example, how to apply voltages to drive it as an amplifier. We cannot obtain proper AC amplification without proper DC biasing.


BJT Configurations:

A BJT has three terminals. Based on these terminals, the transistor can be connected in three different configurations. Each configuration has its own characteristics.

In each configuration, one terminal is an input, the second terminal is output and the third terminal is common in between input and output.
Out of these three configurations, the common emitter is the most extensively used. As you know, to drive a transistor into the active region, the base-emitter junction is forward biased while the base-collector junction is reverse biased. This condition holds for all three configurations.

Key Terms:

Alpha: The ratio of collector current IC to the emitter current IE is called alpha. It is always less than 1 (unity).


Beta: The ratio of collector current IC to the base current IB is called beta. Its value ranges from 20 to 200 or even higher.

Biasing: The application of appropriate DC voltages to drive a transistor into the desired mode of operation.
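Alpha and beta are related: since IE = IC + IB, dividing through by IE gives α = β/(β + 1) and, rearranging, β = α/(1 − α). A small sketch of these conversions (the function names are my own, chosen for illustration):

```python
def alpha_from_beta(beta):
    # alpha = IC/IE = beta/(beta + 1); always less than 1
    return beta / (beta + 1)

def beta_from_alpha(alpha):
    # beta = IC/IB = alpha/(1 - alpha)
    return alpha / (1 - alpha)

# For beta = 100, alpha is 100/101, i.e. just under unity
print(alpha_from_beta(100))
```

This makes the claim in the definitions concrete: even for a large beta of 200, alpha is 200/201 ≈ 0.995, still below 1.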

Frequently Asked Questions:

What will happen if collector and emitter are interchanged?

As I discussed above, BJT is not a symmetrical device. As you interchange the collector and emitter, the transistor will change its mode. Now the transistor works in reverse active mode. In this condition, alpha and beta are much smaller, because the device is optimised to work in forward mode.

BJT is not a symmetrical device. Explain.

As I discussed above, the emitter region is much more heavily doped than the collector region. The non-symmetrical behaviour is due to these different doping levels.

Analysis Of Algorithms

Analysis Of Algorithms - Time Complexity & Space Complexity
Outline:
  • Analysis of an algorithm
    • Time complexity
      • Types of time functions/Order of algorithm
    • Space complexity

In this lesson, you will learn about the theoretical aspect of the algorithms. In the previous lesson, you have learnt about algorithms.

Whenever you design an algorithm, you should keep two parameters in mind: time and space. Time complexity means how much time an algorithm takes to execute, whereas space complexity means how much memory is required for the variables and instructions used in the algorithm. Time complexity and space complexity are the two most important parameters for analysing the efficiency of an algorithm, and they should be independent of the machine.

As I discussed earlier often several different algorithms are available for a single problem. Sometimes it becomes a complicated task to select the best algorithm in terms of time and space.

Time Complexity:

Time complexity is defined as the amount of time taken by the algorithm. It is written as a function of the length or size of input n. It describes the efficiency of the algorithm in terms of a number of operations. As the number of operations or instructions increases, the computational time increases as well (of course!!). Generally, the lesser number of operations, the faster will be the algorithm. In this section, I am going to explain which algorithm is faster than the other. I am going to derive mathematical expressions for different algorithms.

Note:
Time complexity usually refers to the worst-case run time. This will be explained in detail in later lessons.

Example 1:

Take algorithm for adding two numbers.

Start
Declare variables: num1, num2, result
Result = num1 + num2
Display result
Stop
Count the number of operations the computer should perform.

The total number of steps/operations = 3
Time taken by each step = 1 unit of time
Time taken by all steps = 3 units of time
Time complexity = O(3) = O(1)

Where O(3) stands for the order of 3; since constant factors are dropped, this is simply O(1), constant time. Big O notation will be explained in detail in the upcoming lecture.

I assume each step takes one unit of time to execute. In real applications, this is not the case: every statement takes a different amount of time to execute, and execution time also depends on many other factors.
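The addition algorithm above can be written as a short function. This is my own rendering of the pseudocode; the point is that it performs the same fixed number of operations no matter what the inputs are, so it runs in constant time.

```python
def add(num1, num2):
    # One addition, one assignment, one return,
    # regardless of the input values: constant time, O(1)
    result = num1 + num2
    return result

print(add(2, 3))  # 5
```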

Evaluate Time Complexity (Examples) | Order Of Algorithm| Types of Time Function and Order of growth:

We write time complexity like this O(n). This is called a big O notation. This is an asymptotic notation. We will discuss this in the upcoming lecture. The common time complexity functions and their examples are given below.

Constant: (any constant number)

An example is a statement that simply adds two numbers. I have already discussed the algorithm for adding two numbers; it is an example of a constant time function.

Linear: N

An example is a single loop.

Example 1: Let's take an algorithm for 'for loop’.

Start
For numbers 0 to n
Print “hello world”
End

Another way of writing the same algorithm:

Start
For (i = 0, i<n, i++)...... 'n+1’ times
Print “hello world” ……'n’ times
End

Both algorithms are the same, performing the same number of operations; the second one simply uses for-loop syntax. A for loop is a high-level programming language construct (its syntax may vary from one language to another), but the algorithm itself remains the same in any programming language.

Let's begin. How many steps/operations or instructions are in the above algorithm?

For loop iterates for 'n+1’ times. Print statement repeats 'n’ times.

The total number of steps/operations = (n+1) + n = 2n+1
Time taken by all steps = 2n+1 units of time
Time complexity = O(n)

I dropped the constant terms and factors because the order remains the same. It means this algorithm processes the input in time proportional to n.

Example 2: Let's take an algorithm for 'while loop’.
We want to investigate how many times the loop iterates. And hence the time taken by the program.

Start
Declare variables i = 0, n ... executes 1 time
While ( i < n )  ….. executes 'n+1’ times
Print “hello world”. …. executes ‘n’
i++    …. executes ‘n’
End

The total number of steps/operations = 3n+2
Time taken by all steps = 3n+2 units of time
Time complexity = O(n)

Logarithmic: log N

Example is binary search (divide in half). Here for/while loop has logarithmic iterations.

Example 1:

Start
Declare variables i,n ….. executes 1 time
For (i = 1, i<n, i = i*2) ….
Print “hello world”…..executes log2 (n) times
End

I have drawn a table. With its help you can understand how the loop repeats and how the value of 'i' changes.

| i            | i < n   | i = i*2              | Explanation                                                      |
|--------------|---------|----------------------|------------------------------------------------------------------|
| i = 1        | 1 < n   | i = 1*2 = 2          | Simple multiplication                                            |
| i = 2        | 2 < n   | i = 2*2 = 4          | In each iteration, variable i is multiplied by 2                 |
| i = 4 = 2^2  | 4 < n   | i = 4*2 = 8 = 2^2*2  | We can generalise                                                |
| i = 8 = 2^3  | 8 < n   | i = 8*2 = 16 = 2^3*2 |                                                                  |
| i = 16 = 2^4 | 16 < n  | i = 16*2 = 2^4*2     |                                                                  |
| i = 2^k      | 2^k < n | i = 2^k*2            | Suppose the loop repeats 'k' times; this gives the general term  |

From the general term we can write 2^k <= n.
Apply log2 on both sides:
k = log2(n)
The loop iterates log2(n) times.

The total number of steps/operations = log2 (n)
Time taken by all steps = log2 (n) units of time
Time complexity = O(log2 (n))

Note: most of the time you will get a fractional value. Take the whole value by rounding up (the ceiling); that gives the actual number of iterations.
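The doubling loop can be simulated to confirm the count. This sketch (mine, for illustration) counts the iterations directly and compares them with the ceiling of log2(n):

```python
import math

def doubling_iterations(n):
    # Count body executions of: for (i = 1; i < n; i = i*2)
    count = 0
    i = 1
    while i < n:
        count += 1   # "print hello world"
        i = i * 2    # i doubles each pass: 1, 2, 4, 8, ...
    return count

# For n = 16 the body runs for i = 1, 2, 4, 8 -> 4 times = log2(16)
print(doubling_iterations(16))
# For n = 100, log2(100) is about 6.64; rounding up gives 7 iterations
print(doubling_iterations(100), math.ceil(math.log2(100)))
```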

Example 2:

Start
Declare variables i = 1, n ... executes 1 time
While ( i < n )
Print “hello world”…. executes log2 (n) times
i = i*2    …. executes log2 (n) times
End

All calculations are the same for both loops.

The total number of steps/operations = log2 (n)
Time taken by all steps = log2 (n) units of time
Time complexity = O(log2 (n))

Quadratic: N^2

Example is double (nested) loops.

Now again we use the 'for loop' algorithm, this time with a loop within a loop. Here is an analogy for nested loops. Consider 10 boxes, and in each box there are 10 balls. How do you count them?
  • First, count the total number of boxes.
  • Open each box and count the balls in it.
  • Each box contains 10 balls.
  • So 10 boxes contain 10*10 balls.

The same strategy applies to nested loops. In other words, you can say that there are two nested linear functions.

Start
For (i = 0, i<n, i++)...... 'n+1’ times
For (j = 0, j<n, j++) ……'n+1’ times per outer iteration
Print “hello world” ……'n*n’ times
End

The total number of steps/operations ≈ n*(n+1)
Time taken by all steps ≈ n*(n+1) units of time
f(n) = n^2 + n
Time complexity = O(n^2)

Space Complexity:

Space complexity is also a measure of efficiency of an algorithm in terms of memory. An algorithm is efficient if it occupies less memory than the other.

S(p) = C + Sp

Where
S(p) = space requirements
C = a constant: the fixed space taken up by simple variables and instructions.
Sp = the variable part: space that depends on the instance characteristics (for example, the input size). It varies from one algorithm to another.
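The fixed part C and the variable part Sp can be seen side by side in a small sketch (my own illustration): summing a list needs only a couple of variables no matter how long the input is, while making a transformed copy needs space that grows with the input.

```python
def sum_constant_space(nums):
    # Only 'total' and the loop variable are stored:
    # fixed part C only, the variable part Sp is zero.
    total = 0
    for x in nums:
        total += x
    return total

def doubled_copy(nums):
    # Builds a new list of the same length as the input:
    # the variable part Sp grows linearly with the input size.
    return [2 * x for x in nums]

print(sum_constant_space([1, 2, 3]))  # 6
print(len(doubled_copy([1, 2, 3])))   # 3
```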

1. How many important parameters are there to analyse an algorithm?
  1. One
  2. Two
  3. Three

2. Time complexity means the amount of time taken by an algorithm or it defines efficiency of algorithm.

  1. True
  2. False

3. Space complexity determines the efficiency of algorithm in terms of space

  1. Space
  2. Area
  3. Length
  4. None

4. The example of quadratic time function is double loops.

  1. Single loop
  2. Nested loops
  3. Double loops
  4. None

5. Single loop is an example of linear time function

  1. Cubic
  2. Quadratic
  3. Linear
  4. Non linear

6. The formula for space complexity is S(p) = C + Sp

  1. True
  2. False

7. Time complexity for the function below is
Start
For (i = 0, i<n, i = i+2)
Print “hello world”
End

  1. Linear
  2. Quadratic
  3. Cubic 
