We at Tipstat have faced a lot of these challenges with offshore software development over the years and would like to share our insights with clients and companies.
What makes a project successful?
We can say a project is completed successfully if the project meets these three parameters:
1. Its predicted timeline
2. Its budget
3. Its goal
It’s quite simple to write down the methodologies for completing a project: finding an idea, discussing its various aspects, refining it based on previous experience, analyzing merits and demerits, surveying the project’s chances of success, calculating resources and budget, and so on. These are the steps put forward by successful project management experts. And yet the rate of project failure remains surprisingly high. Let’s check out why most IT projects get delayed, canceled, or go over budget.
Why do IT projects fail?
According to the third global survey on the current state of project management, poor estimation during project planning is the largest contributor (32%) to project failure.
Even in this era of astounding technology, nobody has invented a universal recipe for completing a project successfully. The reasons for IT project failure fall broadly into client-related, project-team-related, and user-related issues. Let’s discuss them in detail.
Client-related issues
The successful completion of a project depends heavily on the client’s vision. If the client has drawn up a solid road map that stays flexible in the face of real-time issues, the project has a good chance of being completed successfully. Problems arise when:
1. The scope of the project exceeds the budget: The idea expands once development starts, and with that expansion the cost rises. Other causes include sudden economic changes (war, resource price hikes or shortages, etc.), losses in the client’s other projects, and funds failing to arrive from promised sources.
2. Lack of interest: Some projects show their true colors in the middle of development. A client may have assumed the idea was easy to build, but for lack of proper research discovers mid-way that implementing it is far more complex than expected, or that the finished project may not have the potential they hoped for.
3. Another opportunity: If the client receives a different offer promising much better results, there is a chance they will pause or shut down the current project.
4. Partnership disputes: If the client has multiple partners and a falling-out leads to a split, the future of the project can end with it.
Project Team related issues
1. Lack of a proper team: If the assigned project is complicated and nobody on the team is experienced enough to handle it, the project can fail outright. Likewise, if the same team is juggling multiple projects for different clients, long delays or failures are likely.
2. Lack of proper communication: This is one of the major issues in all projects. If the client fails to convey the exact end product they expect, the project team will proceed down a road that leads nowhere. Similarly, if the sales, development, and testing teams are not on the same wavelength internally, the chances of project failure increase.
3. Timelines and deadlines: As the saying goes, “It’s always easy to begin something but difficult to complete it.” Successful completion of an IT project requires proper scheduling. There will always be a deadline set by the client, and an unprofessional approach from the team can cause it to slip.
4. Changes in the project team: Even when a project is handled by a team of 4-6 members, in most cases a single person supplies the best ideas and drives the work forward. If the most experienced and talented team members are pulled off the project, the expected deadlines cannot be met.
5. Lack of proper tools and resources: If the project team does not use standard, disciplined methodologies for planning, preparing, implementing, and auditing the project, delays, deviation from the core idea, and outright failure become likely.
6. Testing vs development: There is always a tug-of-war between the development team and the testing team. In most cases this rivalry helps refine the project, but if it turns unhealthy, the project gets delayed or canceled.
User-related issues
The users or customers are the ultimate judges of any project’s success. If the product cannot convey its true purpose, it is considered useless even if it gets completed. When a project team works in a closed environment, it loses open communication with users and fails to gather their feedback. The team then invests time, resources, and money at various stages on things the end user does not need. Users may not even use the product if the interface is too complex.
According to the Project Management Institute (PMI), “There is no single method or organizational structure that can be used to manage projects to success.” Let’s look at some of the most common methods used to resolve the issues that cause project failures.
1. Keeping clients in the loop from start to end: This is very important. There should be frequent communication between the client and the project team. The client should be informed about each step taken in the project cycle. Similarly, the project team should set up presentations and detailed reports after the completion of each milestone.
2. Proper project planning: The bigger the project, the smarter the plan should be. There should be a well-documented guideline or project plan to follow. This guideline has to be created together with the client (in most cases the client will provide a guideline, which the team can adapt to its workflow), and the document should be followed in all cases. Selecting the right team members is equally important: the project team should contain people with prior experience of, and deep insight into, similar projects.
3. Adapting to real-time changes: According to THE HARVEY NASH / KPMG CIO SURVEY 2017, 64% of CIOs say that the political, business, and economic environment is becoming more unpredictable (source: https://www.hnkpmgciosurvey.com/pdf/2017-CIO-Survey-2017-infographic.pdf). The project team should therefore always be ready to face unpredictable problems on the project roadmap. Identifying real-time issues and solving them without blowing the deadline is the key.
4. Communication is the key: According to the third global survey on the current state of project management, implementing efficient and effective communication strategies positively affects a project’s quality, scope, business benefits, and performance levels (source: https://www.pwc.com.tr/en/publications/arastirmalar/pages/pwc-global-project-management-report-small.pdf). Studies show that projects with healthy communication have higher success rates. Both client-team communication and interpersonal communication within the team should be effective.
5. User is the king: Even if a project meets every parameter essential to its success, it can still fail miserably on one factor: customer satisfaction. There should be a bridge between users and the project team so the team can test real-world usage and functioning with actual users. This is why well-known companies release a beta version before launching a product.
Hence we can say that the project management process requires a good investment in planning, strict feasibility checks using effective tools, efficient communication, an experienced team, rigorous testing at each level, and active user interaction sessions.
What are the Issues with Offshore Software Development?
1. Cultural Differences
People from different regions react to the same situations in different ways, and cultural differences are the reason.
For example, professionals in many Asian work cultures tend to be uncomfortable with direct criticism, even when they are in the wrong, so the client has to find a tactful way to present what went wrong; European work cultures are comparatively open to direct criticism.
In a research study on the behavior of Indian vendors and German clients, it was found that Indian teams are reluctant to say “no” in many situations.
The study further explains that offshore developers in India tend to adopt a service attitude that makes it difficult for them to say no or to deliver bad news. This can be bad for software development.
The cultural differences impact interactions, communication, interpretation, understanding, productivity, comfort, and commitment.
2. Expecting the Impossible
If your client comes from a working domain other than software (say, a wood-products company that needs an e-commerce site for its finished goods, or a logistics, travel and tourism, or real estate business), their understanding of the software workflow may not match yours.
The client’s mental model of the whole process may be as simple as “press the switch and the lights come on!” If the client is proposing an idea that is close to impossible, the project can get stuck in its earliest phase.
3. The ‘Long Distance Relationship’ Issues
When you go outside your country or continent to find the best third-party partner for your project, there is another factor to worry about: the time zone!
Let’s take the USA and India. If you are based in the USA, which runs almost 9 hours and 30 minutes behind India, your active working time has to be rescheduled for the good of the project.
Suppose the Indian offshore software development company delivers sample data based on your instructions, and the delivery lands between 9 am and 6 pm (normal Indian office hours). You would probably be sleeping (11 pm - 8 am)!
If the sample needs modification, you have to analyze it and provide detailed feedback. But by the time you do, your third-party partner firm will have logged out.
So a single communication cycle can take roughly 12-24 hours, unless one of the two sides is flexible enough to work on the other’s schedule.
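The root of that 12-24 hour cycle is the lack of overlapping office hours, which is easy to check with a short script. This is an illustrative sketch using Python’s standard zoneinfo module; the cities, date, and 9-to-6 office hours are assumptions, not data from the article.

```python
from datetime import date, datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

def overlap_hours(day, tz_a, hours_a, tz_b, hours_b):
    """Overlapping working hours (in hours) between two offices on one calendar day."""
    def window(tz, start_h, end_h):
        start = datetime(day.year, day.month, day.day, start_h, tzinfo=ZoneInfo(tz))
        end = datetime(day.year, day.month, day.day, end_h, tzinfo=ZoneInfo(tz))
        return start, end

    a_start, a_end = window(tz_a, *hours_a)
    b_start, b_end = window(tz_b, *hours_b)
    # Intersection of the two intervals, clamped at zero if they don't meet.
    overlap = min(a_end, b_end) - max(a_start, b_start)
    return max(overlap, timedelta(0)).total_seconds() / 3600

# A US East Coast office vs an Indian office, both working 9 am - 6 pm local:
print(overlap_hours(date(2024, 1, 15), "America/New_York", (9, 18),
                    "Asia/Kolkata", (9, 18)))   # prints 0.0 -- no shared hours
```

With zero shared hours, every question-and-answer round trip waits for the other side’s next working day, which is exactly the 12-24 hour cycle described above.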
4. Lack of Requirement Clarity
This is a major issue that needs complete attention. If the project is your own, it’s quite easy to describe each goal and requirement without confusion or hesitation.
But if the project belongs to a client who hasn’t provided a clear description, the chances of the project ending up close to worthless are high.
First, make sure you have precise knowledge of the requirement yourself. Second, share the requirement with detailed instructions to the software outsourcing company.
Fail at either, and the project can fail miserably.
5. The Cost Issue
The whole point of software outsourcing was the cost advantage. But what happens when the cost surpasses the estimated limit? Yes, it’s a big problem!
Offshore software development is normally billed at a rate per hour, but lack of experience in the domain can increase errors and software testing time unpredictably.
Work that should take under an hour can be delayed by errors for days or even weeks, and the accumulated per-hour cost can blow through the budget.
What are the Solutions?
1. Overcome Cultural Differences
This is one of the most important topics to deal with while choosing an offshore software development company. Especially if the project is handled by a group of people from Europe as well as Asia, the difference in culture will be evident.
Asian work culture tends to rely on well-documented guidelines, whereas European teams lean more on logic-driven documentation. Effective communication is a key factor in such heterogeneous groups, and scheduling meetings around team time zones is also important.
Most important of all, though, is understanding the common goal of the project. Make sure that you and your team members are on the same page about the project workflow.
Setting up short milestones on the road map, and evaluating each in detail after completion, can be very helpful. Sometimes in-person meetings help reduce confusion about the goal of the project.
2. Does this Project Need Outsourcing?
This should be the first question on your mind while analyzing the project at hand. The analysis should be based on the variables that can influence the completion and profitability of the project.
The size of the project, the average time needed for completion, the nature of the project, and the various costs involved are some of the factors to take into account while calculating the project’s scope.
Once you cross-check the detailed report based on all these analyses, you will get a better picture of whether you need an offshore development company or not.
3. Choosing the Most Suitable Offshore Development Company
Once you have confirmed the need for outsourcing by analyzing the scope of the project, the next step is to locate the most suitable third-party firm. There is no such thing as “the best firm.”
Based on the nature of the project, you have to choose the most suitable outsourcing partner.
Let’s discuss some factors that can help identify the choice.
Value Over Cost
Consider two firms, A and B. Outsourcing company A bids $100 and B bids $50. The easy choice is to pick B over A on cost alone.
The right way is to identify the value of each firm by running some background checks; it’s easy to find a firm’s previous work online.
If a firm has delivered successful projects similar to the one in your hand, its prior experience makes it the better choice for you.
Also, you can try to get connected with their previous clients who can share their experiences with you. If the clients are available, you will be able to get details on their working nature, customer satisfaction, deadline keeping, and reporting.
Legally Binding Agreements
This is the most important part of the partnership. Since the project idea and related resources are of high value, a legally binding agreement is a must.
Agreements ensure that you are dealing with a reputable outsourcing company and let you rest assured that your project is in safe hands. An ownership-rights agreement, a contract/SoW (statement of work), termination-rights clauses, and an NDA (non-disclosure agreement) are some of the essentials when you deal with an offshore enterprise.
Do you want to outsource your software requirements? Connect with us today and build a reliable team of offshore developers easily!
4. Proper Project Management
Another important way to avoid outsourcing issues is proper project management. After preparing a foolproof SoW, the next step is preparing developer-friendly guidelines that make the project easy to understand.
Guidelines can be made using slide decks, documents with diagrams, data-flow charts, and so on. Such detailed documentation helps the developer understand the project more closely.
You can fix timelines by breaking the big project into small modules. Predicting a timeline for the complete project may be difficult. But it is easy to determine the timeline of the completion of each module.
There should be project meetings with the project handling team at regular intervals, preferably after the successful completion of each module.
Advantages of Offshoring to India vs other countries
Home to more software companies with ISO 2000 certification than any other country, and second only to the US in software exports, India is undoubtedly high on the list of countries favorable for offshoring.
The 2016 A.T. Kearney Global Services Location Index shows that India is the first choice for BPO. Let’s discuss India’s advantages over other countries in offshoring.
1. Cost Advantage
Indian offshore developers work for an average of $10 - $20/hr, which brings the total project cost down significantly. The intense competition among Indian companies also keeps offshoring cost-effective.
2. English Language Advantage
Thanks to 200 years of British rule and the English-medium school culture it left behind, Indians generally speak good English, which gives India an edge over competing offshore destinations like Ukraine, Malaysia, and China.
3. Quality of Work
Indian offshore developers are highly sought after in the software industry. According to NASSCOM, most FORTUNE 500 companies around the globe use Indian-built software, which speaks to the quality standards.
Also, the highly qualified and experienced experts make no compromise on software build quality.
4. Support, Maintenance, and meeting Deadlines
Indian offshore development companies offer extensive support for their software. Some companies offer a lifetime maintenance assurance with 24/7 support lines. Indian companies are well known for delivering software products on or before deadlines.
ISO- and SEI CMM-based work standards, timezone flexibility, commitment, a stable and calm political environment, and IT-friendly laws and policies are some of the other significant advantages.
Based on a 2013 study by Evans Data Corp, there were approximately 2.7 million software engineers in India. Researchers project that India will overtake the US by 2024 to become the country with the largest software developer population.
5. Cost Analysis (the US vs India)
Based on Time Doctor and DOU data, Australia is the highest-paying country for software developers, with the US second on the list.
India comes ninth on the list, which underlines why the offshoring software business keeps growing there.
According to various job sites, the average annual salary for a software developer in the US is over 100,000 USD, whereas the average annual salary for an Indian software developer is under 8,000 USD.
Hence, by comparison, a US developer costs roughly 12-13 times as much as an Indian developer.
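The 12-13x figure above is simple arithmetic on the quoted averages. The numbers below are the article’s round figures, not real payroll data:

```python
# The article's round salary figures, not real payroll data.
us_annual = 100_000     # average US developer salary (USD)
india_annual = 8_000    # average Indian developer salary (USD)

ratio = us_annual / india_annual
print(f"{ratio:.1f}x")  # prints: 12.5x

# The same comparison at the quoted $10-20/hr offshore rate,
# for a hypothetical 1,000-hour project:
for rate in (10, 20):
    print(f"${rate}/hr -> ${rate * 1_000:,} total")
```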
Meet the Robot Chemist
Three years of experimental research by a team under Professor Andy Cooper at the University of Liverpool has produced an astounding invention: a robot lab assistant! The core idea behind the research was to build a machine that can move around the lab floor and perform experiments just like a human lab assistant.
The robot has to be custom-programmed for the laboratory it is installed in. But once set up, it can handle its assigned tasks for 22 hours a day, 365 days a year, barring unexpected maintenance. A full charge takes approximately 2 hours.
Benjamin Burger, the Ph.D. student who led the trial experiment, said the “new lab assistant” works up to 1,000 times faster than an average human, which is remarkable. Andy Cooper, for his part, emphasized his vision of freeing human brains from the repetitive, boring experiments in research labs.
The robot’s components came from KUKA, a German manufacturer of industrial robots and factory-automation solutions. For the experiment, the team used a robot arm mounted on a mobile base station. The arm could carry up to 14 kg and stretch up to 820 mm; the whole system weighed 430 kg, and its speed was limited to 0.5 m/s for safety reasons. The robot hand was equipped with a multipurpose gripper capable of handling delicate glass vials, cartridges, and sample racks.
The robot automatically recharges its battery whenever the charge drops to a 25% threshold between tasks. It was idle 32% of the time, mainly because gas chromatography analysis is time-consuming. Its AI guidance was based on a Bayesian optimization algorithm: although the robot was given the basic parameters needed for the experiments, it used the algorithm to decide the values of 10 experimental variables. The machine navigates the lab using LIDAR, the same laser-based technology found in self-driving cars, which also lets it work in the dark.
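The article says the robot’s guidance used Bayesian optimization: fit a cheap surrogate model to the experiments run so far, then pick the next experiment where the surrogate predicts the best trade-off between promise and uncertainty. As a hedged illustration of that family of algorithms (a generic one-dimensional sketch, not the actual Liverpool code; the toy objective, RBF kernel, and upper-confidence-bound acquisition are all assumptions):

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    """Squared-exponential kernel between 1-D sample arrays a and b."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Gaussian-process posterior mean and std deviation at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_train
    var = np.diag(Kss - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

def bayesian_optimize(objective, bounds, n_init=3, n_iter=15, kappa=2.0, seed=0):
    """Maximize `objective` on [bounds[0], bounds[1]] with a GP + UCB loop."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(*bounds, n_init)               # a few random "experiments"
    ys = np.array([objective(x) for x in xs])
    grid = np.linspace(*bounds, 500)
    for _ in range(n_iter):
        mu, sigma = gp_posterior(xs, ys, grid)
        x_next = grid[np.argmax(mu + kappa * sigma)]  # upper confidence bound
        xs = np.append(xs, x_next)
        ys = np.append(ys, objective(x_next))
    best = np.argmax(ys)
    return xs[best], ys[best]

# Toy "experiment": an unknown yield curve peaking at x = 2
best_x, best_y = bayesian_optimize(lambda x: -(x - 2.0) ** 2, (0.0, 5.0))
print(best_x, best_y)
```

In the real system the "objective" is a slow physical experiment (hours of mixing and gas chromatography), which is exactly why a sample-efficient optimizer matters: each evaluation is expensive, so the algorithm must make every experiment count.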
The Successful Experiment
The experiment given to the robot was to develop photocatalysts, materials used to extract hydrogen from water using light, an area of research crucial for green energy production. Unlike machines programmed to follow a fixed set of pre-recorded instructions, this robot loaded samples into fragile glass vials, mixed them, exposed them to light, and conducted gas chromatography analysis. The major turning point in the experiment was its adaptability to the workflow, just like a human lab assistant.
Over an eight-day period, the robot conducted 688 experiments, made 319 movements between stations, and covered a total of 2.17 km. Based on the experiment reports, the robot did in days what would have taken a human several months.
The limitations of a human lab assistant
To appreciate the advantages of a robotic arm, consider how a human lab assistant handles lab chores. A lab assistant may be a full-time technician or an aspiring Ph.D. student working part-time alongside a thesis. They may work 9-12 hours a day or more, depending on the nature of the research, but once you subtract breaks for coffee, snacks, chit-chat, lunch, and smoke or bathroom intervals, the productive hours shrink considerably.
We also have to consider the emotional issues that can cloud the human mind and affect judgment on vital scientific hypotheses and conclusions. And no human can work consistently for more than 20 hours a day, 7 days a week, 365 days a year. That is where laboratory robotics becomes significant.
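The gap in productive hours can be made concrete with rough numbers. The robot-side figures (22 h/day, 32% idle) come from the article; the human-side figures are assumptions for illustration only:

```python
# Robot-side figures are from the article; human-side figures are assumptions.
robot_hours_per_day = 22            # article: up to 22 h/day of operation
robot_idle_fraction = 0.32          # article: idle 32% of the time (instrument waits)
robot_annual = robot_hours_per_day * 365 * (1 - robot_idle_fraction)

human_hours_per_day = 8             # assumed effective bench time after breaks
human_days_per_year = 5 * 48        # assumed 48 working weeks of 5 days
human_annual = human_hours_per_day * human_days_per_year

print(f"Robot: {robot_annual:,.0f} productive hours/year")
print(f"Human: {human_annual:,.0f} productive hours/year")
print(f"Ratio: {robot_annual / human_annual:.1f}x")
```

Even after discounting the robot’s idle time, the raw hours alone give it roughly a 3x edge, before counting its consistency and its speed on repetitive steps.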
A helping robotic hand
A robot becomes handy in several areas of a lab research facility: handling extremely dangerous chemicals that can instantly kill or severely injure a human, biohazard conditions, experiments emitting high radiation, underground laboratories, and places with low oxygen levels or limited life-support supplies.
Is this robot a replacement for humans?
“Absolutely not!”, says Professor Andy Cooper, who heads the research. According to him, the whole idea behind the robot assistant is to save the most valuable resource in the scientific world: time. Cooper further stresses that the robot neither invents nor designs experiments, and it comes up with no hypotheses. It is a tool for the tedious, boring, repetitive work in the lab, on which human brains should not be wasted.
Cooper, with his company named Mobotix, plans to commercialize this revolutionary work in the coming months. His idea is to create machines for different roles: a robot researcher, a robot technician, a robot scientist, and so on. The price will vary with each machine’s capabilities; the basic hardware alone can cost anywhere between $125,000 and $150,000!
FastMRI: AI-Accelerated MRI Scanning
Collective research by Facebook’s AI research team (known as FAIR) and NYU Langone Health experts has brought interesting news to the field of magnetic resonance imaging, or MRI scanning. According to their research, an AI-based technique called FastMRI can produce output four times faster than the normal MRI method.
The usual MRI procedure
Magnetic resonance imaging lets doctors investigate issues in the spinal cord, brain, and neck, and identify problems in the joints, chest, abdomen, blood vessels, and more. The process can take anywhere from 45 minutes to an hour, depending on the type and nature of the MRI scan.
It is generally unpleasant for a patient when their doctor orders an MRI; for head or brain scans especially, the patient can feel as if they are in serious jeopardy. Patients lie down on a table that slides into a giant tube, and must first remove all metal accessories (hairpins, zippers, jewelry, body piercings, etc.) from their bodies, including removable dental implants.
When the procedure starts, the machine makes loud buzzing noises; even with earplugs, the sound from the giant machine can be frightening. The magnetic field generated in the room is tens of thousands of times stronger than Earth’s magnetic field. Patients are also asked to hold their breath for 20-30 seconds as part of the procedure. Given all this, it is quite normal for patients to feel anxious and nervous, and for children or claustrophobic patients MRI scans can be very difficult to perform.
What is FastMRI?
The Facebook AI research team and NYU Langone Health started this program two years earlier to speed up MRI scans. The research aimed to find a technology that reduces the duration of current MRI scans. They trained the FastMRI model on a large dataset of MRI scan results and patterns; with that training, the AI can produce a scan result much faster than an ordinary scan.
In the study, radiologists compared scan reports from a traditional one-hour MRI with reports generated by FastMRI, and found that both scan reports returned identical results.
What are the benefits of FastMRI?
The medical world is excited about FastMRI’s promising results because of the tremendous achievements that could follow its deployment. FastMRI matters most where the ratio of MRI machines to patients is badly mismatched; given how long a conventional MRI takes, it is difficult to scan a large number of patients on one machine. FastMRI needs a quarter of the data the existing method needs to generate scan results. That means patients suffering from anxiety or claustrophobia, and child patients, don’t have to spend long, panicky stretches in the scan room. The productivity of a single machine can be multiplied, extending the service to more people in need at the right time. It also helps doctors perform quick scans on emergency patients, such as stroke victims, and FastMRI could even be a good replacement for CT scans and X-rays in some typical cases.
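As a back-of-the-envelope illustration of the throughput gain: the 4x factor and the roughly 60-minute conventional scan come from the article, while the 10-hour daily operating window is an assumption for the sake of the example.

```python
# Back-of-the-envelope throughput for one MRI machine.
conventional_scan_min = 60                    # article: conventional scan ~1 hour
fast_scan_min = conventional_scan_min // 4    # FastMRI: ~4x less raw data to acquire
machine_minutes_per_day = 10 * 60             # assumed 10-hour operating window

conventional_per_day = machine_minutes_per_day // conventional_scan_min
fast_per_day = machine_minutes_per_day // fast_scan_min

print(conventional_per_day, fast_per_day)     # prints: 10 40
```

Under these assumptions one scanner goes from roughly 10 to roughly 40 patients a day, which is the multiplied-productivity effect described above.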
Fugaku: The World’s Fastest Supercomputer
Supercomputers are probably the most reliable research tools for scientists and researchers. They play a very significant role in computational science and in a wide range of computationally intensive tasks, such as quantum mechanics, climate research, weather forecasting, molecular modeling, physical simulations, properties of chemical compounds, macro- and micro-molecule analysis, aerodynamics, rocket science, and nuclear reactions. The faster the supercomputer, the faster and more accurate the computation behind the research.
Engineers and scientists continually build better supercomputers, so every year the world’s supercomputers are ranked by speed. This time the Japanese supercomputer Fugaku, housed at the RIKEN Center for Computational Science in Kobe, has topped the list. It is the successor to the K computer, which topped the list in 2011. Fugaku will be fully operational from 2021.
The supercomputer is built around the Fujitsu A64FX microprocessor, whose CPU architecture is based on Arm v8.2-A and adopts the scalable vector extensions designed for supercomputers. Fugaku was aimed to be 100 times more powerful than its predecessor, the K computer. It recorded a speed of 415.5 petaflops in the TOP500 HPL results, 2.8 times faster than its nearest competitor, IBM’s Summit. Fugaku has also topped the other ranking systems, Graph500, HPL-AI, and HPCG, where supercomputers are tested on different workloads. This is the first time any supercomputer has topped all four ranking lists, a strong signal of its reliability for future work.
The cost of the supercomputer was estimated at around 1 billion USD, roughly 4 times more than that of its closest competitor, Summit. This huge outlay has drawn significant controversy from many experts. According to the New York Times, exascale supercomputers with similar capabilities will be developed in the near future at a much lower cost than Fugaku. The government has also faced heavy criticism, with some speculating that it is spending far too much on the project just to top the list amid the pandemic.
Recently, Fugaku has been used in research on Covid-19 drugs and diagnostics, and in simulating the spread of the coronavirus. It is also being used to track and improve the effectiveness of Japan’s contact-tracing app. According to the Japan Times, in the latest research the supercomputer ran molecule-level simulations related to potential coronavirus drugs: a simulation across 2,128 existing drugs, run for 10 days, picked out dozens that could bind easily to the relevant proteins. The results were quite accurate, as 12 of the drugs it flagged were already undergoing clinical trials overseas. The research raised scientists’ hopes for a remedy for the virus.
The expert team will continue their research on Fugaku, and they have announced that they will negotiate with potential drug patent holders so that clinical trials for a possible drug can be carried out, allowing treatment of infected people to start early.
According to the experts, the supercomputer is also likely to help predict and study earthquakes in the future. Japan has a long history of earthquakes, since the country sits at the junction of several continental and oceanic plates and is surrounded by volcanoes. Fugaku could help detect earthquake risk, giving the government and residents time to follow an escape plan.
The Fifth State of Matter: Bose-Einstein Condensate
Scientists on the International Space Station have produced the fifth state of matter, known as the Bose-Einstein condensate. The other four classical states of matter are solid, liquid, gas, and plasma; the Bose-Einstein condensate, or BEC, is classified as a modern state of matter.
What actually is a Bose-Einstein Condensate?
Basically, a Bose-Einstein condensate forms when a very dilute, low-density gas of bosons is cooled to a temperature very close to absolute zero (-273°C). The temperature is so low that the atoms settle into the same, lowest quantum state. In that state, the spacing between the atoms becomes comparable to their wavelength, and this extremely small separation lets them behave like a single atom. The behavioral change lets microscopic quantum phenomena show up at a macroscopic scale, making otherwise undetectable effects detectable.
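The “spacing comparable to the wavelength” criterion above has a standard textbook form, stated in terms of the thermal de Broglie wavelength (this is general physics background, not a formula from the article itself):

```latex
% Thermal de Broglie wavelength of an atom of mass m at temperature T:
\[
  \lambda_{\mathrm{dB}} = \frac{h}{\sqrt{2\pi m k_B T}}
\]
% Condensation sets in when the interatomic spacing n^{-1/3} shrinks to
% the order of \lambda_{dB}, i.e. when the phase-space density satisfies
\[
  n \,\lambda_{\mathrm{dB}}^{3} \;\gtrsim\; \zeta(3/2) \approx 2.612
\]
```

Because the wavelength grows as the temperature falls, cooling a dilute gas far enough always pushes it past this threshold, which is why the experiment is all about reaching ever-lower temperatures.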
BECs are made in the coldest place in the observable universe: the Cold Atom Lab (CAL), a laboratory on the International Space Station orbiting at a height of 408 km. The Cold Atom Lab can cool particles in vacuum down to one ten-billionth (1/10^10) of a degree above absolute zero. That temperature is extremely close to absolute zero, but not equal to it, since reaching absolute zero exactly is not physically possible.
How is a BEC prepared in the Cold Atom Lab?
To prepare the BEC, a gas of boson atoms is injected into the Cold Atom Lab. The atoms are trapped and confined in a small region by a magnetic trap, and laser beams are then used to lower their temperature. Once the fifth state of matter is reached, the main challenge begins: studying and analyzing it. To study the condensate, the atoms are released from the magnetic trap. As the atoms of the BEC are allowed to expand, their temperature drops further, since gases cool as they expand. But if the atoms drift too far apart, they stop behaving like a condensate and revert to behaving as individual atoms. This hypersensitivity gives researchers only a tiny span of time to study the condensate. Gravity also plays a crucial role in the experiment, which is why it has to be done in space rather than on Earth.
Why is the experiment carried out in the International Space Station?
There is a significant reason for carrying out the experiment on the International Space Station, or in space generally. If the experiment is performed on Earth, then as the condensate expands, Earth's gravity pulls the atoms downwards and they spill onto the base of the apparatus. To work around gravity, researchers devised a plan to let the condensate free-fall, temporarily escaping its effect. This method was tried earlier in Sweden, where an apparatus carrying the condensate was launched to a height of about 240 km on a suborbital flight, creating roughly 6 minutes of free fall.
Eventually, the International Space Station was chosen for the experiment, since satellites and the ISS itself are in a state of permanent free fall in low Earth orbit. This allows the experiment to run for longer, providing enough time and data to analyze and study the live condensate. So far the condensate has been observed for a total of 1.118 seconds, although the researchers' goal is to observe it for more than 10 seconds.
The Cold Atom Lab was launched by NASA in 2018 at an estimated budget of $70M. The lab is just 0.4 m³ in volume and contains the lasers, magnets, and other components needed to control, trap, and cool the atomic gas. The atoms are initially held at the center of a vacuum chamber and later transferred onto an 'atom chip' at the top of the chamber. The fractionally hotter atoms are then removed from the chip using radio waves, leaving behind extremely cold atoms at less than a billionth of a kelvin.
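That radio-wave step is a form of evaporative cooling: skimming off the hottest atoms lowers the average energy, and hence the temperature, of what remains. A toy numerical sketch of the idea — the exponential energy distribution and the 10% cut are illustrative assumptions, not CAL's actual parameters:

```python
import random

def evaporative_cooling_step(energies, cut_fraction=0.1):
    """Remove the most energetic atoms (as the radio waves do on the atom
    chip); the mean energy of the remaining cloud -- a stand-in for its
    temperature -- drops."""
    energies = sorted(energies)
    return energies[: int(len(energies) * (1 - cut_fraction))]

random.seed(42)
cloud = [random.expovariate(1.0) for _ in range(100_000)]  # toy thermal distribution

for step in range(5):
    mean_e = sum(cloud) / len(cloud)
    print(f"step {step}: {len(cloud):6d} atoms, mean energy {mean_e:.3f}")
    cloud = evaporative_cooling_step(cloud)
```

Each pass trades away a small fraction of the atoms for a colder remainder, which is why the finished condensate contains far fewer atoms than the gas that was injected.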
Although the study of this new state of matter is still in its infancy, in the future it could enable extremely significant inventions and discoveries. Being ultra-sensitive, the Bose-Einstein condensate could form the basis of instruments that detect the faintest signals and other elusive phenomena in the observable universe, such as gravitational waves and dark energy. Researchers also see its potential in inertial sensors such as accelerometers, gyroscopes, and seismometers.
Hundreds of other crucial experiments and studies can be performed on the International Space Station, whose permanent free fall makes it uniquely suited to them. Currently, scientists are experimenting with the new state of matter under unique conditions in the hope of discovering or inventing something novel. Though they can now create a Bose-Einstein condensate in space, they are working hard to increase the duration of each experiment.
The world is turning to automation, and so is the automobile industry. There has been rapid and significant development in autonomous vehicles and in the AIs controlling them on the road, even in dreadful conditions. With the coronavirus pandemic, self-driving companies and start-ups have had to suspend their real-world data collection, which requires teams of operators and vehicles out on the road; lockdowns do not allow these organizations to operate on the streets. But the lockdown has also sparked new ideas for the industry. Researchers have come up with techniques for creating virtual simulated worlds in which to train and develop automated vehicles; all they need is the data collected over the years in the real world, mapped onto virtual-world simulators.
Waymo, a self-driving software company whose parent is Alphabet Inc., has offered the data it has gleaned to research organizations for the development of virtual-world simulators and autonomous driving. Waymo's data sharing is considered significant because its vehicles have already covered millions of miles on the road in varied conditions. Other companies such as Lyft and Argo AI have also contributed by open-sourcing their data sets.
The data is collected in the field via an array of high-technology devices. The vehicles are covered with multiple sensors, including several cameras, radars, and LIDARs (Light Detection and Ranging). The LIDAR equipment bounces laser pulses off the surfaces of nearby objects, from which 3D images of the surroundings are created. Waymo's data set contained 1,000 segments, each encapsulating 20 seconds of continuous driving. More firms have decided to contribute data to researchers, with transparency playing a significant role.
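At its core, each LIDAR return is a time-of-flight distance plus the beam's direction, which converts into one point of the 3D image. A minimal sketch of that conversion — the angles and the pulse timing below are made-up example values:

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one LIDAR return (distance plus beam angles) into a 3D point,
    the basic step behind the '3D images of the surroundings'."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A pulse whose round trip took ~133.4 ns hit a surface roughly 20 m away:
C = 299_792_458.0              # speed of light, m/s
round_trip_s = 133.4e-9
distance = C * round_trip_s / 2
print(lidar_return_to_xyz(distance, azimuth_deg=30.0, elevation_deg=0.0))
```

A real sensor fires hundreds of thousands of such pulses per second, and stacking the resulting points produces the point cloud the rest of the pipeline works on.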
Data labeling has been an integral part of building the simulators, in parallel with the 3D images generated. Rather than simply laying off vehicle operators, organizations are now training them in data labeling. This gives the industry newly skilled associates who will come in handy after the lockdown, when they resume their original roles. Aurora Innovation, a Palo Alto-based company, has taken a similar approach by moving its operators into the data labeling sector.
New companies like Parallel Domain provide autonomous vehicle companies with a platform that generates a virtual world using computer graphics. Parallel Domain was started by Kevin McNamara, a former Apple and Pixar employee with experience in autonomous-system projects, who said: "The idea being that, in a simulated world, you can safely make a mistake and learn from those mistakes, also you can create dreadful situations where the AI needs to be trained essentially".
Aurora Innovation, on the other hand, says it uses "hardware-in-the-loop" (HIL) simulation, a technique used in developing and testing complex real-time embedded systems. This kind of simulation helps add all the types of complexity a system must withstand. According to Chris Urmson, the procedure helps them detect software issues that may not show up on a developer's laptop or even on cloud instances, but manifest on real hardware.
Embark, an autonomous trucking start-up, has invested in software that can test vehicles and their components offline, allowing it to exercise the vehicle control system, including the brakes, accelerator, steering wheel, and other significant parts, across an extreme range of command inputs.
Nvidia, a leading graphics-processor and AI development company, is also helping big players such as Toyota with its virtual-reality autonomous vehicle simulator, Nvidia Drive Constellation. Drive Constellation uses high-fidelity simulation to create safer, more cost-effective, and more scalable training for autonomous vehicles. It uses the computing horsepower of two different servers to deliver a cloud-based platform capable of generating billions of qualified miles of autonomous vehicle testing. Powerful GPUs generate photoreal data streams that create a wide range of testing environments and scenarios.
The main concern now is the pandemic and how these organizations will tackle such situations. Scale AI is another company helping numerous automation firms, including Lyft, Toyota, Nuro, Embark, and Aurora, with the detailed labeling of previously collected data. This detailed labeling is achieved via 'point cloud segmentation'. For newcomers: point cloud segmentation is the process of classifying a point cloud into multiple homogeneous regions, such that points in the same region share the same properties. The segmentation is challenging because of high redundancy, uneven sampling density, and the lack of explicit structure in point cloud data. The method encodes a correspondence for every point on the 3D map and can thereby differentiate between pedestrians, stop signs, lanes, footpaths, traffic lights, other vehicles, and so on.
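A minimal flavor of point cloud segmentation can be sketched as grouping points that lie within a small radius of one another — a crude flood-fill stand-in for the production-grade methods labeling firms actually use. The radius and the tiny five-point cloud below are invented for illustration:

```python
def segment_point_cloud(points, radius=1.0):
    """Group 3D points into segments: points within `radius` of each other
    end up in the same region (simple connected-components clustering)."""
    def close(a, b):
        return sum((pa - pb) ** 2 for pa, pb in zip(a, b)) <= radius ** 2

    unvisited = list(range(len(points)))
    segments = []
    while unvisited:
        seed = unvisited.pop(0)
        region, frontier = [seed], [seed]
        while frontier:
            cur = frontier.pop()
            neighbours = [i for i in unvisited if close(points[cur], points[i])]
            for i in neighbours:
                unvisited.remove(i)
            region.extend(neighbours)
            frontier.extend(neighbours)
        segments.append(region)
    return segments

# Two well-separated clusters, e.g. a "pedestrian" and a "stop sign"
cloud = [(0.0, 0.0, 0.0), (0.3, 0.1, 0.0), (0.1, 0.4, 0.2),   # cluster A
         (9.0, 9.0, 0.0), (9.2, 8.8, 0.1)]                    # cluster B
print(segment_point_cloud(cloud))   # two segments
```

Real systems replace this geometric grouping with learned per-point classification, but the output has the same shape: every point assigned to a labeled region.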
The Scale AI team is also encoding a 3D map for simulation using a 'gaze detection system'. This even allows encoding the gaze direction of a pedestrian, a cyclist, or the driver of another vehicle in order to predict their movement, i.e., whether the pedestrian is about to cross the road. Developing this technology will let the AI anticipate the next move of a pedestrian or driver, minimizing the possibility of an accident.
The pandemic has not just made us adapt to the situation; it has also pushed researchers to adapt the technology to it. Such developments show mankind's constant endeavor to build a better society. The world is set to become largely automated in the coming years, and the autonomous vehicle industry is rising exponentially. Even the pandemic and the resulting lockdown aren't enough to curb this innovation. Soon, self-driving vehicles will be on the streets.
Nvidia, the world's leading graphics-processing company, is developing some futuristic AIs. It has been working on several artificial intelligence projects and carrying out major research in this field for a long time. This time the company has pushed its boundaries and developed an astonishing AI that recreated the retro classic Japanese game Pac-Man, on its 40th anniversary, from scratch, just by watching gameplay. The AI is called NVIDIA GameGAN, and it is essentially a neural game engine.
How does this AI recreate a game by just watching the gameplay?
Well, the researchers say the basic principle used here is 'model-based learning': the entire logic of the game, including controller inputs, is captured by neural networks, and from that learned model the game is regenerated from scratch, frame by frame. Hence the AI requires no hand-written code or rendered game images.
The AI, though, was not able to recapture the images of the game's 'ghost' characters, which are meant to chase and kill Pac-Man, and so rendered them blurry. This happened because the movement of each ghost is governed by its own complex and unique algorithm that determines its path across the maze. Pac-Man's behavior, by contrast, is far simpler to learn, as its movement is tied directly to the controller inputs.
The basic architecture of GameGAN is divided into three parts, the dynamics engine, the rendering engine, and the memory module, and it works in two halves. In the first half, the neural game engine generates frames from the controller inputs; in the second half, this generated data is compared against data from the original game. If the generated data matches the source, the frame is accepted; if not, it is rejected and sent back through the generation process. This loop continues until the data matches accurately.
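GameGAN itself is a large adversarially trained network, but the generate/compare/accept loop described above can be caricatured in a few lines. Every function and number below is invented purely for illustration; the real system learns its dynamics and comparison from data rather than using fixed formulas:

```python
def render_frame(state, action):
    """Toy 'dynamics engine + rendering engine': the next frame is a
    deterministic function of the current state and controller input."""
    return (state * 31 + action) % 97

def train_step(real_frame, state, action):
    """Caricature of the generate/compare/reject loop: propose candidate
    frames and accept only the one matching the real gameplay frame."""
    for noise in (-2, -1, 1, 2, 0):              # imperfect candidate generations
        candidate = (render_frame(state, action) + noise) % 97
        if candidate == real_frame:              # compare against original game data
            return candidate                     # accepted: matches the source
    raise RuntimeError("no candidate matched; regenerate")

state, action = 5, 3
real = render_frame(state, action)   # ground-truth frame from recorded gameplay
print(train_step(real, state, action) == real)   # True: loop converged on the real frame
```

In the real model the "compare" step is a learned discriminator scoring whole sequences, not an exact equality check, but the training pressure is the same: keep regenerating until the output is indistinguishable from recorded gameplay.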
Sanja Fidler, Nvidia's director of AI at its Toronto research lab, said that GameGAN had to be trained on 50,000 episodes of Pac-Man to produce a fully functional game requiring no underlying game engine. Since it would have been impossible for a human to generate such a humongous amount of data, an AI agent was used to play and generate the episodes. One initial challenge was the near-invincibility of this agent: it was so good at the game that it hardly ever died, which produced a recreated game in which the ghosts merely followed Pac-Man around without ever catching it.
According to the researchers, the memory module of GameGAN adds a new dimension to it. The memory module stores an internal map of the game world, the static elements, separately from dynamic elements like Pac-Man and the ghosts. This allows the AI to create new maps, levels, and worlds by itself, without any human intervention, gifting gamers countless new maps and game worlds and enhancing the dynamics of the game exponentially.
Advantages and future prospects of the GameGAN AI
Several advantages have been predicted, by the researchers as well as by gamers themselves, for these new capabilities of the AI.
The biggest advantage of this AI will be speeding up game development and creation. Creators will not have to code new layouts and levels of a game from scratch; the AI will create new game worlds visually.
The AI will simplify the development and creation of new simulation systems for training autonomous machines. This will allow the AI to learn the rules of the actual working environment even before interacting with any other real object of the world.
In the near future, from visual data alone, machines will be able to drive a car, shop for groceries, play a sport, learn the laws of physics in the real world, and more, which will be a humongous achievement for development.
The AI will make it much easier to port a game from one operating system to another. The game will not have to be re-coded for each operating system; the AI will do it automatically.
The game can be compressed by the AI into the memory module of its neural networks and stored there permanently, allowing the AI to keep developing both the static and dynamic parts of the game on its own.
In the near future, this AI will allow automated machines to outperform humans in dangerous and catastrophic situations, carrying out experiments and rescue operations.
Experiments, research, implementation, and development of new kinds of AI are being carried out by scientists, researchers, and engineers all across the world. The new age of machines and artificial intelligence is beginning, and soon we will be aided by efficient AI robots with different capabilities that cut wasted time and accelerate human development to new extents. GameGAN is an exquisite example of progress in machine learning and deep learning, where training a machine visually has now been made real. Such AI will be used extensively in the near future to generate new simulators without a set of codes to ponder over, simply by training complex neural networks. We look forward to new and amazing AIs from Nvidia and other organizations.
Leverage machine learning in your organization with Tipstat. Contact us here
Interested in working with Tipstat on AI? Check out our open positions here
Musical AI is evolving fast. Many independent organizations are coming up with impressive AI solutions that make machine learning a tool in musical workflows. For example, OpenAI, an independent research organization that aims at developing "friendly AI," has delivered many impressive AI tools over the last few years: it created the language-generating tool GPT and has recently added Jukebox.
Jukebox, an AI that generates raw audio of genre-specific songs, might not be the most practical application of AI and machine learning, but the fact that it can create new music given only a genre and lyrics as input is quite astonishing. Jukebox can also rewrite existing music, generate songs based on samples, and even do covers of famous artists. Samples are offered in the voices of Elvis Presley, Katy Perry, Frank Sinatra, and Bruno Mars (at jukebox.openai.com). The results are nowhere near realistic, but listening to 'Katy Perry' or 'Frank Sinatra' in different styles shows that Jukebox captures some aspects of their musical styles. As OpenAI put it on their blog, "the results researchers got were impressive; there are recognizable chords and melodies and words".
But how did OpenAI do it?
OpenAI's engineers made use of artificial neural networks (ANNs), machine learning models commonly used to identify patterns in images and language. Here they are used to identify patterns in audio: millions of songs and their metadata are passed through these neural networks, from which new music is created. In other words, the engineers provided the AI with a huge database of songs and then had it create new tracks that follow the same patterns and beats found in that database.
Creating tracks that resemble the provided samples requires a lot of computing power; the model has to undergo intensive training with large amounts of data. According to the OpenAI team, to train the model they curated a new dataset of 1.2 million songs, 600,000 of them in English, paired with their lyrics and metadata including genre, artist, and year.
Technical Details of Training Model – For those of you who are into ML engineering. Others can skip of course 🙂
▪ The model on which the AI was trained had two million parameters and ran on more than 250 graphics processing units for three days.
▪ The sampling sub-model, which adds loops and transitions to the track, was composed of one billion parameters and was trained on about 120 graphics processing units for many weeks.
▪ The top level of the output hierarchy has more than five billion parameters and was trained on more than 500 GPUs.
▪ The lyrics output by Jukebox also went through two weeks of intensive training.
▪ The model is trained on 32-bit, 44.1 kHz raw audio using a Vector Quantized Variational Autoencoder (VQ-VAE), which compresses the very long raw-audio sequences into shorter sequences of discrete codes that are much faster to model.
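The "vector quantized" part of a VQ-VAE means snapping each encoded audio frame to its nearest entry in a learned codebook, so a long waveform becomes a short list of integer codes. A bare-bones sketch of that quantization step — the tiny 2-D codebook and frames here are invented; Jukebox's real codebooks are learned and far larger:

```python
def quantize(frame, codebook):
    """Vector quantization step at the heart of a VQ-VAE: replace a
    continuous latent vector with the index of its nearest codebook entry."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(frame, codebook[i]))

# Tiny illustrative codebook of 2-D 'latent' vectors
codebook = [(0.0, 0.0), (1.0, 1.0), (-1.0, 1.0)]
frames = [(0.1, -0.2), (0.9, 1.2), (-0.8, 0.7)]
codes = [quantize(f, codebook) for f in frames]
print(codes)   # each frame collapses to a short discrete code: [0, 1, 2]
```

The upstream transformer then models these compact code sequences instead of millions of raw samples, which is what makes training on 44.1 kHz audio tractable at all.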
The training model and code are available in the openai/jukebox GitHub repo.
Limitations of the training model
There remains a significant gap between music created by the Jukebox neural network and human-created music. Jukebox-created songs show plenty of familiar features, such as coherence, solos, and traditional instrument patterns, but they lack choruses and the repeated structures of a song. Upsampling the tracks produces noise, which degrades their overall quality. Performance of the trained model is also not up to par: on average it takes about 9 hours to fully output a minute of audio, which can be a bottleneck when rendering and delivering audio samples on cloud platforms. Lastly, the model only produces English tracks, since it was trained exclusively on a database of English songs; samples and lyrics in other languages have not yet been trained on the platform.
Legal and Ethical issues with such AI Models
Jukebox raises further issues around delivering a sample from the provided input. First, there is copyright: training an AI on recorded music always requires a copy of each track, although this type of training is generally considered 'fair use'. The second issue is the output, and this one can have serious consequences. Jukebox produces new tracks from existing metadata, namely lyrics and genre. What if those lyrics are protected by copyright? What if music 'in another style or genre' presents a distorted image of the original singer to their audience?
In many corners of the music community, the Jukebox platform may raise concerns, whether over copyright infringement or over decreasing the value of human-made music. But alongside the issues come the benefits: music creators will be excited and curious about how they can implement this creative AI tech in their workflows.
All these opinions and questions are completely natural; they always accompany the latest tech innovations. Is AI good or bad for humans? Well, it depends. So the best option is to explore and understand what Jukebox technology is really capable of. Understanding the technology will help in forming reasoned opinions while reducing practical issues with the platform.
Overall, Jukebox represents a step forward in improving the musical quality of generated samples with new lyrics, giving creators more freedom to make music. The ability to condition the output on artist, genre, and lyrics is one of Jukebox's biggest strengths.
This is also not the first music AI tool the San Francisco-based AI laboratory has delivered. OpenAI has been working on generating audio samples conditioned on different kinds of metadata for years. Last year it released MuseNet, which was trained on a large amount of MIDI data using deep neural networks to compose new tracks with different instruments across genres from country to pop to rock.
Looking to Leverage AI in your organization? Reach out to us here
Interested in joining our ML Team? Please check out the open positions here
It was only a few months ago that the healthcare industry around the world seemed undeterred, unswayed by any external factor, least of all by an invisible virus. This one deadly virus, however, has changed how we look at healthcare now.
Apart from economies, service sectors, and other industries, healthcare has been the hardest hit, and it has been at the forefront of this battle.
Social distancing, isolation, quarantine, and self-sufficiency have become the new norm of the day. Countries are shifting focus from imports and exports towards becoming self-sufficient.
Work cultures are changing. Companies are not only allowing professionals to work from home but in fact office parties have gone online as well.
Considering all of this, industries across the spectrum must adapt, just like the virus that is forcing them to.
Since it looks like the virus is here to stay until a vaccine is developed, the only way forward is to build a life around it. Building business models that are COVID-19-centric is one way to go.
This would provide long term solutions instead of mere shortcuts. This would also provide a framework for times to come.
But, not everything is as hunky-dory as it sounds. Changing business models overnight and adjusting to the post corona scenario is easier said than done. Especially in an industry as regulated as healthcare.
The one solution that healthcare has come up with to deal with these uncertain times is by going digital. This is being done through telemedicine. Some companies doing really well in this arena are Beam, GYANT, and Hale Health among so many others.
What is Telemedicine?
Ever since the turn of the century, every industry has gone through structural changes. The invention of the internet and the advent of technology are the main sources behind such changes.
And it hasn't left the healthcare industry untouched either. The result of that structural change in healthcare is telemedicine.
Telemedicine is a term used in the healthcare industry which refers to conducting medical activities and healthcare-related services using electronic information and telecommunication technologies.
Even though the usage of telemedicine as of now is relatively less, the industry is projected to grow to $130 billion by 2025.
Given the changing dynamics in the world, thanks to COVID-19, it looks like the projection of $130 billion will reach us far before 2025.
Instead of asking what is telemedicine, the real question should be why telemedicine?
Telemedicine can help politicians fulfill the dream of an affordable healthcare system that they promise almost every year. The importance of telemedicine today lies in the spirit of social distancing and hygiene: isolation can be maintained between doctors and patients, not just in spirit but in reality.
An estimated $2.9 trillion is spent on healthcare in the USA alone, and almost $250 billion of that goes into unnecessary spending. With a little upgrading of technological infrastructure, healthcare businesses can save plenty, including on employee housing and office maintenance.
A proper, error-free record of patient data can be maintained with the help of software integrated with this technology. Service quality is the most important factor in a service industry like healthcare, and telemedicine can elevate it. Telemedicine also offers patients easy accessibility.
Patients are in general scared of going to hospitals and even more so in these Covid times. Telemedicine could address this shortcoming as the need to go to the hospital gets eliminated completely.
Patients and doctors consult over video calls and the required tests are prescribed. This has also led to increased efficiency in sample collections.
Samples are being collected from the doorstep and the reports are getting delivered online. Think about the amount of paper, fuel, and time that this will also save in turn!
Companies will also return to strategic stockpiling of pharmaceutical commodities as well as essential goods. Strategic stockpiling was prevalent during the Cold War and immediately after the oil shock in 1973.
Companies eventually gave up this practice due to the heavy costs incurred in holding such large amounts of excess inventory. However, it will be done again, considering the shortage of goods and the losses incurred at the onset of the pandemic.
Once scientists find a cure for this deadly virus, the focus will shift to vaccine research. Funds are expected to flow in this direction, which has otherwise been funded mostly by philanthropists. Vaccine research will become mainstream, as experts predict that this may just be the beginning.
Apart from all these changes, post the pandemic, it is high time that a Universal Health Care Scheme is brought in place. The world should swiftly move in this direction to make healthcare easily accessible to all.
Let alone poorer countries, even a superpower like the US was unable to manage the situation efficiently for its population. A UHC scheme has been discussed in the UN General Assembly and the WHO, but for some reason it has never received the attention it is due.
Post this pandemic, you might also walk into your doctor's cabin and find his or her new assistant to be a robot. This robot will have all your medical history ready at its fingertips.
This integration of human intelligence with artificial intelligence and machine learning is meant to enrich the consultation.
Doctors could directly access patient records and medication history instead of the patient telling it all over again.
This reduces wasted time: the patient can jump directly to explaining their problem. It could be improved further to reduce the strain the healthcare system currently faces.
Now, what are some of the challenges that the healthcare industry faces when it comes to implementing all that has been discussed above?
The immediate challenge in adopting telemedicine and telehealth, as well as integrating AI and ML with human intelligence, is the lack of training and infrastructure. This may not be a long-term problem, but our healthcare personnel aren't yet trained to use the digital infrastructure telehealth requires. The next challenge is the lack of human touch: although telemedicine is extremely accessible and affordable, many patients miss human sensitivity, distrust the technology, and may not get the 'feel' of a regular doctor's appointment.
Once most of healthcare goes digital, blockchain technology can be used for functions such as record management, healthcare surveillance and monitoring epidemics.
Since information, once entered, cannot be manipulated, transparency and patient data security can be ensured by implementing blockchain in the healthcare system on a large scale.
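The tamper-evidence property comes from chaining hashes: each block's hash covers the previous block's hash, so editing any past record invalidates everything after it. A minimal sketch using Python's standard hashlib (the record fields are invented examples; a real healthcare blockchain would add signatures, consensus, and access control):

```python
import hashlib
import json

def add_record(chain, record):
    """Append a record whose hash covers the previous block, so any later
    edit to an earlier record breaks every hash after it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    block = {"record": record, "prev": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash from the start; any tampering breaks the chain."""
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps({"record": block["record"], "prev": prev_hash},
                             sort_keys=True)
        if block["prev"] != prev_hash or \
           hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

chain = []
add_record(chain, {"patient": "A-102", "event": "teleconsultation"})
add_record(chain, {"patient": "A-102", "event": "lab sample collected"})
print(verify(chain))                       # True
chain[0]["record"]["event"] = "altered"    # tamper with history...
print(verify(chain))                       # ...and verification fails: False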
But, as mentioned in the beginning of this article, change doesn’t happen overnight. It takes time for people to adapt.
The future of the healthcare industry lies in telemedicine. While there are both pros and cons associated with it, the need is for proper regulations and COVID centric as well as long term policies.
Once done, there is no doubt that we can create a universally accessible and affordable healthcare system with no boundaries for anyone.