Selfbits GmbH (https://selfbits.de/en/): We measure the heartbeat of your production.

From rearview mirror to radar: Key performance indicator management that makes production truly controllable
https://selfbits.de/en/learn/from-rearview-mirror-to-radar-key-performance-indicator-management-that-makes-production-truly-controllable/
Wed, 04 Mar 2026 09:50:33 +0000

Make key figures from data silos usable in everyday production. This article shows which KPIs really control shift operations in addition to OEE—from WIP to setup times to first-time right.

The post From rearview mirror to radar: Key performance indicator management that makes production truly controllable appeared first on Selfbits GmbH.


From rearview mirror to radar: Key performance indicator management that makes production truly controllable

Digitalization significantly increases the amount of available data, yet this does not automatically translate into the ability to act. What matters is deriving a few signals from the data that concretely support decisions in shift operation. If evaluations only appear in reports with a delay, deviations often become visible only once their effects have already reached deadlines, inventories, or rework.

Image: Sample illustration

OEE as a solid basis with clear boundaries

OEE is a widely used metric because it combines availability, performance, and quality in a single figure, enabling an initial classification. In practice, this structure already helps to focus on the causes, because an availability problem typically points to downtime and malfunctions, a performance problem tends to point to speed losses, process instability, or deviations from the target, and a quality problem points to scrap, rework, and process parameters.
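The relationship between the three factors can be sketched in a few lines. A minimal example in Python, assuming the three loss factors are already available as ratios between 0 and 1 (the function and argument names are illustrative, not part of any specific system):

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE as the product of its three loss factors, each a ratio in [0, 1]."""
    for name, value in (("availability", availability),
                        ("performance", performance),
                        ("quality", quality)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0 and 1, got {value}")
    return availability * performance * quality

# Example: 90 % availability, 95 % performance, 98 % quality
print(round(oee(0.90, 0.95, 0.98), 3))  # → 0.838
```

The multiplicative structure is what makes the classification useful: a low overall value can be traced back to whichever factor pulls it down.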

At the same time, OEE is often used as an aggregate target value and is susceptible to definition effects. If, for example, break times, fault limits, or cycle rates are defined in such a way that the figure “fits,” this creates a good picture on paper without improving delivery capability, flexibility, or actual flow. As a result, OEE remains valuable as a local view of a plant, but is often insufficient for operational control along the entire process, especially when bottlenecks and intermediate stocks between stations determine performance.

Key figures with direct leverage in everyday life

For operational control, a few clearly defined key performance indicators supplement OEE because they identify bottlenecks at an early stage and point directly to specific measures in shift operation.

Work in Process and Flow Efficiency

High inventories between workstations are a strong indication of asynchronous processes, for example due to fluctuating cycle times, disruptions, or a lack of coordination in the sequence.

If WIP increases visibly, it is worth checking whether material releases, priorities, and bottleneck capacities match the actual flow. The control benefit arises because individual machines should not “run well” in isolation, but rather the entire process should run smoothly.
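Both quantities are simple to compute once lead times and throughput are recorded. A sketch with illustrative numbers; `flow_efficiency` and `expected_wip` (the latter via Little's Law) are hypothetical helper names, not part of any specific MES:

```python
def flow_efficiency(value_added_minutes: float, lead_time_minutes: float) -> float:
    """Share of the total lead time in which value is actually added."""
    if lead_time_minutes <= 0:
        raise ValueError("lead time must be positive")
    return value_added_minutes / lead_time_minutes

def expected_wip(throughput_per_hour: float, lead_time_hours: float) -> float:
    """Little's Law: average WIP = throughput x lead time."""
    return throughput_per_hour * lead_time_hours

# An order receives 45 minutes of value-adding work within a 5-hour lead time:
print(flow_efficiency(45, 5 * 60))  # → 0.15
# At 20 units per hour and 5 hours lead time, roughly 100 units sit in the process:
print(expected_wip(20, 5))          # → 100
```

A flow efficiency of 15 percent means most of the lead time is waiting, which is exactly the signal that points to asynchronous stations rather than slow machines.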

Setup time reality, plan versus actual

In many companies, planned values are based on standard times that no longer correspond to the current batch size structure and actual processes.

If an MES regularly shows that certain setup processes take longer than calculated, this provides a clear starting point for improvements, for example through better setup preparation, standardized processes, or targeted training.

This means that a deviation does not become a reporting issue, but rather a specific CIP task that can be linked to a process step.
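Such a plan-versus-actual comparison can be sketched as follows; the setup operation names, planned times, and tolerance factor are illustrative assumptions:

```python
# Planned setup times per setup operation in minutes (illustrative names and values);
# deviations beyond a tolerance factor become candidates for CIP tasks.
PLANNED_SETUP_MIN = {"press_01->variant_B": 30, "mill_02->variant_C": 45}

def setup_deviations(actual_minutes, tolerance=1.2):
    """Return setup operations whose actual time exceeds plan by the tolerance factor."""
    findings = []
    for operation, actual in actual_minutes.items():
        planned = PLANNED_SETUP_MIN.get(operation)
        if planned is not None and actual > planned * tolerance:
            findings.append((operation, planned, actual))
    return findings

print(setup_deviations({"press_01->variant_B": 52, "mill_02->variant_C": 46}))
# → [('press_01->variant_B', 30, 52)]
```

Each finding names a concrete process step, which is what turns the deviation into an assignable improvement task rather than an aggregate report line.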

First Time Right

First Time Right measures how many units go through the process without rework.

If this value drops, additional loops arise in rework and testing, which tie up capacity and undermine planning in practice.

Control is achieved when FTR is promptly traced back to the station where it originated so that causes such as material deviations, process parameters, or work instructions can be corrected where they arise.
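A minimal sketch of a station-level FTR calculation, assuming each unit is recorded as passed or not passed per station (the data layout is illustrative):

```python
from collections import Counter

def ftr_by_station(records):
    """First Time Right per station: share of units that passed without rework.
    Each record is a (station, passed_first_time) pair; the layout is illustrative."""
    total, ok = Counter(), Counter()
    for station, passed in records:
        total[station] += 1
        if passed:
            ok[station] += 1
    return {station: ok[station] / total[station] for station in total}

records = [("assembly", True), ("assembly", True), ("assembly", False),
           ("testing", True), ("testing", True)]
print({s: round(v, 2) for s, v in ftr_by_station(records).items()})
# → {'assembly': 0.67, 'testing': 1.0}
```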

Conclusion and practical requirements for dashboards

A good key performance indicator system is not improved by its scope, but by its clarity. Dashboards should be tailored so that key performance indicators can be influenced and any deviation triggers a specific response. Otherwise, it remains mere statistics, even if the presentation looks professional.

This is how to do digitalization right!

Selfbits supports you in bringing key figures from data silos into everyday production and preparing them effectively for shift operation. In a non-binding initial consultation, we clarify which key figures have the greatest leverage and how they can be operationalized.


5 steps to your digitalization roadmap
https://selfbits.de/en/learn/digital-production-i40/5-steps-to-your-digitalization-roadmap/
Wed, 25 Feb 2026 16:38:47 +0000

The post 5 steps to your digitalization roadmap appeared first on Selfbits GmbH.


5 steps to your digitalization roadmap

In many companies, production is technically well positioned. Machines and automation are often at a high level. At the same time, information about orders, quality, and malfunctions is still surprisingly often recorded manually, maintained in spreadsheets, or passed on informally at handover points. This is precisely where friction losses arise, which cost time and, ultimately, money.

A digitalization roadmap brings order to this topic. It helps you take the right steps in the right order without getting lost in projects that sound modern but change little in everyday life. It is crucial that the roadmap is not created as a theoretical target vision, but rather based on practical experience. Five steps are often sufficient for this.

Image: Sample illustration

1. Accurately record the status quo

The first step is to take an honest inventory. Not on slides, but where processes actually take place. It’s about understanding how information is created, how it is used, and where it gets lost or recorded multiple times along the way. Often, it’s small breaks between areas, layers, or systems that have a big impact later on.

It makes sense to trace typical processes in their entirety. From order approval to feedback, from quality control to documentation. If it becomes apparent that decisions are based on assumptions or that data is only available at a very late stage, this is a clear signal. These observations are not a criticism, but rather the basis for meaningful prioritization.

2. Define and prioritize goals

Digitalization is not an end in itself. That’s why it should be clear early on what specific improvements are to be achieved. These could include greater adherence to deadlines, less downtime, better traceability, or faster and more secure documentation. It is important that goals are formulated in such a way that they are understood within the company and can be verified on a daily basis.

The next step is about focus. If too many topics are started at once, attention is divided and implementation becomes difficult. A viable roadmap focuses first on a few points that bring tangible benefits and do not overwhelm the organization. This initial stage creates the momentum for further expansion.

3. Define the technical basis and system architecture

Many digitization initiatives fail not because of the idea itself, but because of an unclear system landscape. If data from machines, systems, and manual entries does not converge reliably, a patchwork quilt quickly emerges. That is why it is worth asking early on how data sources will be connected and how applications will be built on top of them.

In practice, a modular approach has proven successful. A stable data connection and clear data flow form the basis. Applications can then be built on top of this that each fulfill a clear purpose, for example, for fault cause recording, traceability, or digital work instructions. This approach reduces dependencies, facilitates expansions, and prevents every adjustment from becoming a major project.

4. Implementation with a view to the shop floor

Whether a system is used in the long term is determined by its daily use. That is why the roadmap must consistently take the user’s perspective into account. It is not enough for a solution to be technically correct. It must also be quick to use, logically structured, and fit into the workflow.

Short feedback loops are crucial here. Initial versions should be tested early on, ideally in real-world operations. Feedback from production often immediately reveals where inputs take too long, where terms are unclear, or where a step in the process is missing. If adjustments are implemented promptly, acceptance increases noticeably. Digitalization is then perceived not as an additional obligation, but as a support.

5. Pilot, evaluate, roll out

A roadmap becomes resilient when it is implemented in stages. A pilot area is ideal for this because it provides quick insights and limits risks. It is important to define how success will be measured before the start. Afterwards, a conscious evaluation is made of what works, what needs to be improved, and what standards should apply to the rollout.

Expansion should only take place once the pilot is running smoothly. This ensures a controllable expansion with clear learning effects. This approach keeps the project manageable, prevents overload, and ensures that each expansion stage is built on a functioning foundation.

What happens next

When these five steps are neatly built upon one another, the topic of digitalization becomes something tangible. Not as a large overall project that piles up over months, but as a series of decisions that are comprehensible in the company and improve everyday life step by step. This keeps the effort manageable, the effect visible, and allows the next expansion stages to be planned on a stable foundation.

This is how to do digitalization right!

Book a free initial consultation today. We will discuss your current situation, prioritize the most important use cases, and outline the next steps for a practical digitalization roadmap.


Make-or-buy in manufacturing IT: Why supposedly cheaper in-house development often results in high follow-up costs
https://selfbits.de/en/learn/digital-production-i40/make-or-buy-in-manufacturing-it-why-supposedly-cheaper-in-house-development-often-results-in-high-follow-up-costs/
Wed, 25 Feb 2026 16:34:33 +0000

The post Make-or-buy in manufacturing IT: Why supposedly cheaper in-house development often results in high follow-up costs appeared first on Selfbits GmbH.


Make-or-buy in manufacturing IT: Why supposedly cheaper in-house development often results in high follow-up costs

In many manufacturing companies, Excel is considered a particularly flexible and cost-effective solution for evaluations and simple process support. Licenses are available, the tools are familiar, and BI tools can be used to create meaningful visualizations. This quickly gives the impression that a homemade solution could completely replace an MES.

However, this often reduces the decision to the initial effort involved. In fact, it is a strategic choice between short-term adaptability and long-term process reliability, especially when solutions transition into regular operation and grow beyond individual use cases.

Image: Sample illustration

The risks of shadow IT growth

Homegrown solutions often arise from an understandable impulse. A specific problem appears, a quick solution is needed, and a capable person implements a tool that initially seems to work reliably. Over time, however, this often develops into shadow IT that is difficult to control, because the solution is gradually expanded without maintainability, governance, and scalability having been considered from the outset.

A typical weak point is dependence on individuals. Homegrown systems are often closely linked to the knowledge and availability of individual employees, for example when complex VBA logic or special data processing methods grow organically over many years. If this person is absent or leaves the company, a supposedly inexpensive solution becomes an operational risk because changes, troubleshooting, and further development are not reliably secured.

Added to this is the integration gap. Excel and BI tools are very good at preparing and visualizing data, but they are only of limited use when it comes to consistently recording data, managing processes, and mapping traceability in an audit-proof manner. In practice, machine data and manual entries must converge, and a reliable data basis is needed for control and verification. If this integration is not handled properly, data silos and media breaks arise, which cost time in everyday work and undermine data quality.

What an integrated platform typically does

One common objection to choosing a platform is that standard software cannot adequately map specific processes. This argument falls short when modern platform approaches are understood not as rigid products, but as adaptable foundations.

A platform approach usually provides stable basic components, including data storage, interfaces, user and rights management, and security mechanisms. Building on this foundation, applications for worker guidance, quality control, or feedback can be configured and developed in line with processes without having to rebuild the basic infrastructure each time.

Decision criteria for practice

Criteria that are directly related to everyday production help to evaluate the available options objectively.

Audit security and traceability

When there are documentation requirements, such as who recorded which measurements at which station, spreadsheet solutions quickly reach their limits. What is crucial is a traceable, tamper-proof history of transactions that can also be reliably explained under audit conditions.

Scalability in rollout

A solution for an isolated area may work in the short term. However, as soon as multiple lines, shifts, or locations need to be consistently served, the effort and susceptibility to errors increase significantly, especially when it comes to versioning, permissions, interface maintenance, and uniform processes. Platforms are typically designed to support such rollouts in a structured manner.

Real-time capability in control systems

Many evaluations are delayed and are primarily suitable for analysis. If operational interventions are necessary during the current shift, data streams, status logic, and alarms become more important than downstream reports. Here, the architecture determines whether control is actually possible in real time or whether deviations only become visible once they have already taken effect.
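The difference between downstream reporting and status logic can be illustrated with a minimal sketch: instead of aggregating at the end of the shift, an alarm fires the moment a machine enters a fault state. The state names and stream format are assumptions for illustration:

```python
def detect_alarms(status_stream, fault_states=("FAULT", "BLOCKED")):
    """Emit an alarm the moment a machine enters a fault state, rather than
    reporting it after the shift. State names and stream format are illustrative."""
    previous = None
    alarms = []
    for timestamp, status in status_stream:
        if status in fault_states and previous not in fault_states:
            alarms.append((timestamp, status))
        previous = status
    return alarms

stream = [("07:00", "RUN"), ("07:12", "FAULT"), ("07:13", "FAULT"), ("07:30", "RUN")]
print(detect_alarms(stream))  # → [('07:12', 'FAULT')]
```

The point is architectural: this kind of event-on-transition logic requires a live data stream, which batch exports into spreadsheets cannot provide.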

Conclusion

In-house development remains a sensible option for prototypes, one-off analyses, or clearly defined, non-business-critical sub-processes. However, as soon as solutions are operated on a permanent basis, need to be scaled, or have to meet compliance requirements, the cost and risk structure shifts. Maintenance, troubleshooting, dependencies, and a lack of future security then often outweigh the license costs saved.

Are you interested in an MES?

When weighing up the pros and cons of in-house development and a platform, it is worth taking a structured look at process criticality, rollout prospects, and verification requirements. Selfbits helps you evaluate these criteria in line with real manufacturing processes and derive a robust implementation strategy.


Why digitization projects in manufacturing fail: 10 typical reasons from small and medium-sized enterprises
https://selfbits.de/en/learn/why-digitization-projects-in-manufacturing-fail-10-typical-reasons-from-small-and-medium-sized-enterprises/
Mon, 26 Jan 2026 13:56:37 +0000

The post Why digitization projects in manufacturing fail: 10 typical reasons from small and medium-sized enterprises appeared first on Selfbits GmbH.


Why digitization projects in manufacturing fail: 10 typical reasons from small and medium-sized enterprises

Many medium-sized manufacturers are currently under considerable pressure to act. Increasing competition, higher energy prices, and growing regulatory requirements are making it increasingly difficult to manage purely analog manufacturing efficiently and reliably. The switch to digital production is therefore becoming a practical necessity for many. In implementation, an MES (Manufacturing Execution System) often forms the core because it connects operational manufacturing with planning and control and makes data from the shop floor usable.

Despite this necessity, digitization projects often fail in reality. In very rare cases, this is because a solution is fundamentally unsuitable from a technical standpoint. More often than not, the prerequisites for a smooth introduction are lacking. These include a clear target vision, stable processes, reliable data, and clear responsibilities. Equally important is an approach that fits in with the day-to-day business of small and medium-sized enterprises.

This article describes ten typical patterns that cause digitization projects in manufacturing to fail. The focus is on topics related to MES, data acquisition, and shop floor transparency. For each point, you will find countermeasures that have proven themselves in practice.

Image: Example illustration “Digitization project”

Classification: What does digitization mean in concrete terms in production?

In many medium-sized companies, digitalization primarily means reliably recording production data and making orders and capacities transparent. Based on this, key figures can be calculated consistently and decisions in shop floor management can be made on the basis of reliable data.

An integrated MES is the central link for this. It is used to monitor, control, and optimize production in real time. At the same time, it connects the ERP and the manufacturing level so that planning, feedback, and evaluations are interlinked without media breaks.

1. There is no target vision, only a list of desired functions

Typical pattern: The project starts with the goal of becoming more digital and ends up with a long list of requirements. There is a lack of prioritization, a clear path to benefits, and a robust scope.

Countermeasures:

  • Formulate a target vision that contributes to a few measurable results. These could be, for example, better traceability or faster feedback.
  • Prioritize the most important use cases and derive a roadmap from them that fits your day-to-day business.
  • Before selecting the tool, clarify which management decisions need to be made better in the future and which data is really needed for this.

2. Digitalization is managed as an IT project, not as a production program

When responsibility lies entirely with IT, operational ownership is often lacking. The technical implementation is then flawless, but the benefits do not reach the shop floor in everyday use.

Countermeasures:

  • Anchor functional project leadership in production and give it clear decision-making authority.
  • Set up IT as an enabler: for architecture, security, operations, and interfaces.
  • Define lean governance with roles, decision-making paths, escalation, and a fixed cycle for steering and approvals.

3. Processes are not stable, yet they are being digitized

Digitization reflects reality. If reality consists of special cases, the system quickly becomes a collection of exceptions. This is often evident in work schedules and feedback logic, and it becomes noticeable at the latest during shift handovers, setup processes, or quality assurance.

Countermeasures:

  • First create standards and then map them digitally.
  • Manage exceptions consciously: Define what is standard, which cases are considered exceptions, and who decides on them.
  • Use the logic of lean and shop floor management to ensure that transparency leads to root cause analysis and improvement, rather than just more reporting.

4. The scope is too broad and the rollout is happening too soon

In medium-sized companies, projects run parallel to day-to-day business. If several areas are to go live at the same time, this can lead to overload, quality problems, and declining acceptance.

Countermeasures:

  • Start with a clearly defined area, stabilize the solution there, and only then scale further.
  • Take a modular, step-by-step approach. This way, benefits become apparent early on and expansions remain manageable.

5. Data quality and master data are underestimated

An MES is only as good as the data that flows in and out of it. Unclear or inconsistent master data leads to incorrect planning, erroneous feedback, and unreliable evaluations. This creates mistrust, and mistrust blocks its use on the shop floor.

Countermeasures:

  • Define data ownership: Who is responsible for which master data and who is authorized to approve changes?
  • Establish validation rules and approvals before data is used productively.
  • Plan data maintenance as an ongoing task rather than a one-time migration.

6. Machine data and shop floor feedback are not reliably received

Without reliable MDE and BDE data, there can be no real-time transparency. Key figures are discussed, but not believed.

Countermeasures:

  • Define an integration strategy for machines and consider retrofitting where necessary.
  • First define data points in technical terms and only then link them technically. This includes statuses, cycles, reasons for downtime, and quality information.
  • Test early and realistically whether data quality and granularity are sufficient before building key figures and shop floor routines on them.
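Defining data points in business terms first can be as simple as a small declarative structure; the fields and example values below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataPoint:
    """A machine data point, defined in business terms before any technical mapping.
    Field names and example values are illustrative."""
    name: str    # what the signal means on the shop floor
    unit: str    # engineering unit, or "enum" for coded states
    source: str  # where the value will later come from (PLC tag, manual entry, ...)

# Business-level definitions come first; concrete PLC addresses are mapped later.
MACHINE_STATUS = DataPoint("machine_status", "enum", "PLC tag (to be mapped)")
CYCLE_TIME = DataPoint("cycle_time", "s", "PLC tag (to be mapped)")
DOWNTIME_REASON = DataPoint("downtime_reason", "enum", "operator terminal entry")
```

Keeping the definition separate from the technical binding makes it easy to review with the shop floor team before any integration work starts.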

7. ERP integration and interfaces are planned too late

If MES and ERP are not properly integrated, this results in duplicate maintenance, media breaks, and correction loops. This immediately becomes apparent in everyday work due to the additional effort required.

Countermeasures:

  • Treat integration as a core task and plan for it early in the project.
  • Harmonize master data, order information, and feedback, including clear booking logic.
  • Define responsibilities between production, IT, and ERP operations in a binding manner so that interfaces remain stable in everyday use.

8. Key figures are introduced without their definition and purpose being clear

Key performance indicators are management tools. If definitions are inconsistent, the team will discuss numbers instead of causes. This is particularly critical for production-related KPIs such as OEE, because they combine time, quantity, and quality in a single indicator.

Countermeasures:

  • Create a KPI glossary that includes definitions, data sources, calculation logic, responsible parties, and purpose.
  • Select key figures that guide action and can be directly translated into measures.
  • Ensure that data collection and KPI logic are aligned so that dashboards remain comprehensible and inspire confidence.
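Such a glossary entry can be kept as lightweight structured data so that definitions stay consistent across dashboards. A sketch with illustrative field content; the structure mirrors the points above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KpiDefinition:
    """One entry of a KPI glossary; the fields mirror the glossary points above."""
    name: str
    definition: str
    data_source: str
    calculation: str
    owner: str
    purpose: str

# Illustrative entry; the concrete content and responsibilities vary per company.
oee_entry = KpiDefinition(
    name="OEE",
    definition="Availability x Performance x Quality per line and shift",
    data_source="MES machine states and quality feedback",
    calculation="availability * performance * quality",
    owner="Production manager",
    purpose="Identify the dominant loss category per line",
)
```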

9. Change management is reduced to a briefing

A new system changes routines. Suddenly, reports are no longer submitted on paper, downtime must be justified, and quality data must be recorded in a structured manner. Without acceptance, the system is quickly perceived as a disruptive factor.

Countermeasures:

  • Qualify based on roles, because workers, shift supervisors, work planners, and production managers require different content.
  • Make the benefits visible in everyday life, for example through fewer queries, better shift handovers, and faster problem detection.
  • Take leadership on the shop floor seriously: discuss key figures regularly and follow up on measures consistently.

10. Operation and further development are unclear after go-live

Many projects end organizationally with the go-live. However, the benefits only arise during stable operation and continuous development. Without defined support, release processes, and clear responsibilities, teams gradually revert to Excel and paper.

Countermeasures:

  • Define an operating model with support, monitoring, roles and rights, and clear escalation paths.
  • Plan release processes and responsibilities in such a way that further development continues to take place after go-live.

Conclusion

Digitalization in manufacturing is not an end in itself. It succeeds when project work and everyday production are brought together. This starts with a clear vision and stable processes. Added to this are reliable data, well-planned interfaces, and clear responsibilities. Those who lay these foundations significantly reduce project risks and establish the basis on which an MES and data-based control can function permanently.

A pragmatic approach that suits small and medium-sized businesses

Small and medium-sized businesses benefit from clean execution, clear accountability, and a predictable cycle. An approach that first creates clarity and then delivers step by step has proven successful:

1. Record the system landscape and processes in a structured manner and identify the most important bottlenecks.

2. Define and prioritize goals and requirements together with specialist departments and management.

3. Assess potential and create a roadmap with specific work packages and responsibilities.

4. Implement in stages and expand the system in a modular fashion as soon as the previous step is running smoothly.

5. Embed operation and continuous improvement in the organization so that the solution is maintained and further developed on a daily basis.

Ready for digital manufacturing?
Let’s take the first step together.


What is SQCDP?
https://selfbits.de/en/learn/digital-production-i40/what-is-sqcdp/
Mon, 08 Dec 2025 12:17:59 +0000

The post What is SQCDP? appeared first on Selfbits GmbH.


What is SQCDP?

In an increasingly competitive environment, it is crucial for manufacturing companies to create transparency about the processes on the shop floor in order to be able to make informed decisions quickly.
A proven approach that provides clear guidance for managers and teams is SQCDP.
In just a few minutes, you will gain an overview of what the term stands for, how SQCDP is applied, and what benefits it offers.

Image: Floating dashboards in a production hall.

What are SQCDP boards?

SQCDP boards are visual management tools for daily management at the point of value creation. They bundle key performance dimensions (Safety, Quality, Cost, Delivery, People) on a standardized interface, structure 15-minute shop floor meetings, and embed problem solving in day-to-day business. When implemented correctly, they increase transparency, accountability, and responsiveness.

Each dimension is assessed daily using a simple traffic light logic:

  • Green means “OK”
  • Yellow signals “action required”
  • Red indicates “critical”

In addition, many boards contain additional key figures to make developments even clearer. The respective teams independently define which KPIs are relevant for their process and which target values they want to achieve.
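The traffic light logic itself is deliberately simple. A sketch in Python, assuming a "higher is better" KPI and an illustrative relative tolerance band for the yellow zone:

```python
def traffic_light(value, target, warn_band=0.05):
    """Map a KPI value against its target to a board color.
    Assumes 'higher is better'; warn_band is an illustrative relative yellow zone."""
    if value >= target:
        return "green"
    if value >= target * (1 - warn_band):
        return "yellow"
    return "red"

# Illustrative: a first-pass-yield target of 98 %
print(traffic_light(0.985, 0.98))  # → green
print(traffic_light(0.94, 0.98))   # → yellow
print(traffic_light(0.90, 0.98))   # → red
```

For "lower is better" KPIs such as scrap rate, the comparison is simply inverted; the important design choice is that the thresholds are defined by the team that owns the board.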

The 5 pillars

1. Safety

This dimension covers all safety-related aspects of manufacturing. Key figures on accidents, near misses, or safety-critical observations help to identify risks at an early stage and eliminate them consistently. The goal is a working environment free of avoidable hazards.

2. Quality

Quality is a key driver of customer satisfaction. The board visualizes deviations and quality indicators such as scrap rate, first-pass yield, or complaints, thereby laying the foundation for rapid intervention.

3. Cost

This pillar focuses on all relevant cost factors in production. Material costs, energy consumption, and deviations from cost forecasts are made transparent so that targeted countermeasures can be taken.

4. Delivery

The Delivery section shows delivery performance, such as delivery reliability, throughput times, and backlogs. Teams can quickly identify where bottlenecks are occurring and which levers are effective.

5. People

This dimension highlights employee workload and availability. Key figures such as overtime or absenteeism show when capacities are becoming scarce and where support is needed.

How SQCDP is implemented

The introduction of SQCDP follows a clear, proven process that is designed to integrate transparency and accountability into daily operations in a sustainable manner. First, companies define the relevant KPIs for each dimension, always with an eye to existing processes, regulatory requirements, and established standards at the plant. These key figures form the foundation on which daily evaluation is based.


The next step is to install the board physically or digitally at the point of value creation. It is crucial that it is visible and accessible to all employees. At the same time, routines for the daily shop floor meetings are established: time, duration, participants, and escalation paths. The teams fill in the fields on the board at the end of each working day and visualize the status using the traffic light system. Deviations are documented, responsibilities are clearly assigned, and measures are tracked in a binding manner.


This approach is deliberately pragmatic. Added value is not created through theoretical concepts, but through daily discipline, reliable processes, and continuous adjustments based on clear priorities. Over time, teams develop a common understanding of process stability, bottlenecks, and optimization potential, and systematically implement these insights.

Advantages of SQCDP

SQCDP creates a consistent management and control logic that is directly reflected in operational performance. Transparency regarding safety-related facts, quality deviations, delivery performance, cost development, and employee workload enables an objective assessment of day-to-day business. Decisions are made faster and based on facts, which significantly increases response speed.

In addition, the method promotes accountability. Since teams maintain and evaluate their own KPIs, awareness of cause-and-effect relationships increases. Problems are identified earlier and dealt with in a structured manner, rather than only becoming apparent after the fact. This has a positive effect on productivity, quality, and adherence to deadlines.

Another advantage lies in standardization across departments. When multiple lines or plants operate according to the same principles, a uniform control model emerges that facilitates comparisons and enables benchmarks. This consistency strengthens operational excellence, a value that has proven itself in traditional manufacturing organizations for decades.

Digital SQCDP

With increasing digitalization, the electronic mapping of SQCDP offers significant efficiency gains. Digital boards automate data collection and updating, reduce manual effort, and minimize sources of error. Key figures flow directly from existing systems such as MES into the daily overview. This ensures real-time transparency and increases the quality of decisions.

At the same time, digital SQCDP opens up new possibilities: trends are automatically identified, deviations are immediately flagged, and measures are tracked digitally. Dashboards can be used across locations, allowing for uniform control in larger production networks, while the basic idea remains unchanged. Especially in industrial environments, where traditional processes are increasingly being supplemented by data-driven methods, digital SQCDP fits seamlessly into existing structures. Companies benefit from the robustness of the classic model and, at the same time, from the speed of modern digitalization. It’s a combination that ensures long-term competitiveness.

If you want to set up your production in a data-driven and reliable way, we will be happy to support you.

With a modern MES, PDA, or MDA system, you create the data foundation needed for methods such as SQCDP to really take effect.

The post What is SQCDP? first appeared on Selfbits GmbH.

ACTUAL and TARGET – REFA and MTM in comparison https://selfbits.de/en/learn/lean-management/actual-and-target-refa-and-mtm-in-comparison/ Mon, 17 Nov 2025 09:46:43 +0000 https://selfbits.de/uncategorized/actual-and-target-refa-and-mtm-in-comparison/ There are two popular methods for process design and optimization in industry: REFA and MTM. But what are the characteristics of these two methods? And what are the differences between them?

The post ACTUAL and TARGET – REFA and MTM in comparison first appeared on Selfbits GmbH.


ACTUAL and TARGET – REFA and MTM in comparison

Time is the hardest currency in industry. Both REFA and MTM promise reliable time data on predominantly manual work processes for planning, calculation, and control, but they start at very different points. REFA provides a broad range of tools for recording, evaluating, and designing existing processes, while MTM breaks down processes into granular building blocks, making workflows measurable even at the planning stage. This article classifies the two most popular methods of process analysis and optimization, highlights similarities, strengths, and limitations, and provides recommendations on when each method makes more sense.

Image: Sample illustration

What is REFA?

The REFA Fachverband e. V. was founded in Berlin in 1924 and is now based in Darmstadt. REFA focuses on methods of work design, business organization, and corporate development.

At the core of REFA is time study. Real processes are described, divided into process sections, measured, and evaluated. The performance of the person being observed is assessed relative to the REFA standard performance; this results in basic times and, with surcharges, the target times for planning and calculation.

How does a REFA time study work?

1. Determine the intended use

The intended use determines the required level of care and accuracy of the study.

2. Description of the work to be measured

The work task, work procedure, and work method used must be specified in detail.

3. Take time measurements

The process is divided into stages and measuring points are defined. The times determined in this way are documented in the REFA time recording sheet.

4. Performance assessment

The observed performance is rated against a reference performance, known as the REFA standard performance.

5. Statistical analysis

The times determined are checked for statistical validity. If necessary, measurements must be taken again.

6. Calculation of the basic time

The basic time of the work process is obtained by adding up the basic times of the individual process sections.

7. Calculation of the target time

Recovery times, distribution times, and any other surcharges are added to the basic time.

8. Calculation of the target time per unit

The result is the target time per unit, fully documented for planning, calculation, or remuneration.
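Steps 4 to 8 can be sketched numerically. This is a simplified model under stated assumptions: one performance rating per process section and purely percentage-based surcharges; the section times, ratings, and surcharge values are illustrative, not REFA reference data.

```python
# Sketch of REFA steps 4-8: performance rating, basic time, surcharges.
# All numeric values are illustrative assumptions.

def basic_time(measured_time_s, performance_rating_pct):
    """Normalize a measured section time to REFA standard performance (100 %)."""
    return measured_time_s * performance_rating_pct / 100.0

# Step 3: measured section times (seconds) with observed performance ratings (%).
sections = [(12.0, 110), (8.0, 95), (20.0, 100)]

# Step 6: basic time of the work process = sum of the section basic times.
tg = sum(basic_time(t, p) for t, p in sections)

# Step 7: add recovery and distribution surcharges (illustrative percentages).
recovery_pct, distribution_pct = 4.0, 10.0
target_time = tg * (1 + (recovery_pct + distribution_pct) / 100.0)

# Step 8: target time per unit, ready for planning and calculation.
print(f"basic time: {tg:.1f} s, target time per unit: {target_time:.1f} s")
```

A section measured at 12.0 s with a 110 % rating contributes 13.2 s of basic time, so the rating corrects for an observer who worked faster than standard performance.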

What is MTM?

Like REFA, MTM is both an organization and a method. The MTM method pursues similar goals, but starts earlier. While REFA measures the existing work system, MTM can already be used to plan manual processes.

MTM describes manual work using standardized movement and activity modules. Each module has a standardized time value in TMU (time measurement unit), whereby 1 TMU corresponds to 0.036 seconds. The time values are set so that they can be achieved by an average skilled employee over an entire working day.

(Link to an MTM-1 data card)

Typical building blocks are reaching, grasping, bringing, joining, or releasing; target times are calculated from their time values without a real production line existing.
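The building-block logic can be sketched as a simple sum. The module names follow the list above, but the TMU values are illustrative placeholders, not values from an official MTM-1 data card; only the conversion 1 TMU = 0.036 s is taken from the text.

```python
# Sketch: summing MTM-style movement modules to a planned target time.
# TMU values per module are illustrative assumptions.

TMU_TO_SECONDS = 0.036  # 1 TMU = 0.036 s

# Hypothetical analysis of one assembly cycle: (module, TMU value)
cycle = [
    ("reach",   10.5),
    ("grasp",    2.0),
    ("bring",   12.2),
    ("join",    16.0),
    ("release",  2.0),
]

total_tmu = sum(tmu for _, tmu in cycle)
cycle_seconds = total_tmu * TMU_TO_SECONDS
print(f"{total_tmu:.1f} TMU = {cycle_seconds:.2f} s per cycle")
```

Because the time values are predetermined, this planning calculation works before any real production line exists, which is exactly the point of MTM.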

In practice, there are different system levels: MTM-1 works very finely, while MTM-UAS and MTM-MEK are condensed and therefore significantly faster to use. This allows the analysis effort to be scaled to suit the question at hand without losing reproducibility.

Modern workflows also allow MTM analyses to be derived from VR/motion capture data, resulting in valid planning times even at very early stages.

Similarities and differences at a glance

Both methods provide standardized time data for planning, calculation, and control. Quality is achieved through clear task descriptions and correct application. The difference lies in the approach: REFA evaluates actual performance based on the real process, while MTM generates target times from predefined movement modules.

In practical terms, this means that new launches and variant comparisons benefit from MTM because layouts, cycles, and ergonomics can be evaluated prospectively. Series production, remuneration issues, and audits often require REFA time studies because they are based on actual data and are compatible with collective agreements and works agreements. Many plants use a combination of both: MTM provides the target value, while REFA calibrates and validates during ramp-up and series production.

Conclusion

MTM and REFA are not rivals, but tools for different phases of the same task: REFA measures, MTM models. Depending on the issue at hand, one provides a robust basis for the current situation, the other provides reliable target values. The most effective approach is a combination of the two: MTM for setting goals, REFA for verification in everyday practice.

Want to capture time data automatically?

Learn more about MDA and PDA with the Selfbits Manufacturing Suite.

Data security https://selfbits.de/en/learn/data-security/ Mon, 16 Jun 2025 15:30:07 +0000 https://selfbits.de/uncategorized/data-security/ Data security encompasses every measure aimed at protecting all types of company-owned data. Find out more here.

The post Data security first appeared on Selfbits GmbH.


Data security

Data security encompasses every measure aimed at protecting all types of company-owned data. Besides shielding information from unauthorised access, it also preserves data integrity against manipulation and technical failure. In order to run business processes smoothly, data security must additionally guarantee the permanent availability of the data.

Image: Sample illustration

Protect your Business - Secure your data!

Cybercrime, cyber-defence, cyber-analytics – there is no shortage of buzz-words in the realm of IT security. All of them refer to the online world, yet none pinpoints a single specific issue. Taken together, however, these lofty terms highlight a trend that deserves our utmost attention!
Today, hacking attacks are a daily occurrence across every industry. The growing number of mobile, networked systems has two consequences. First, companies face a rising need to safeguard data: they must protect their systems from unwanted access, ensure high data quality and at the same time guarantee continuous availability. Only when these three components are secured can data be transformed into information and, through analysis, be put to valuable use. Second, the losses caused by cyber-crime are climbing sharply. According to Germany’s Federal Criminal Police Office, cyber-attacks alone racked up damages of more than €203 billion per year in Germany.
Long story short – data security is a matter for top management! Data security covers every measure whose goal is to secure a company’s data. The decisive factor when implementing security measures is the state the data are in at the moment of protection:

  • Stored data
  • Data being processed
  • Data in transit

Depending on that state, different protective options exist. Although no measure can provide 100 percent security, IT security can be improved continuously and systematically. Every facet of data security must be considered if the protection is to be as comprehensive as possible. In the era of the Fourth Industrial Revolution – better known as Industry 4.0 – data security is moving ever more into focus. From automated machine-data collection on the shop floor to the fully digitalised smart factory, vastly more data circulate today than in past generations.

Five Important Facets of Data Security

Every facet contributes a building block to data security within the company. If one facet is ignored or left without appropriate measures, the other safeguards also suffer. Only a complete view of all facets provides a solid basis for building further protective mechanisms.

1. Confidentiality

Stored or transmitted data are protected against unauthorised access. That means only authorised persons may access stored data. The same care must be taken during transmission so that no one can intercept the data. Symmetric or asymmetric encryption methods are generally used to ensure this facet.

2. Integrity

Integrity means maintaining the completeness and high quality of the data. Suitable measures protect them against unwanted change and manipulation. Safeguarding data against loss due to technical failure is also part of this facet. Access controls log entry to the data and make changes traceable. Regular data backups further reduce the risk of data loss.
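One integrity safeguard can be shown in a few lines: a message authentication code over stored data makes unwanted changes detectable. The sketch uses only Python's standard library; the hard-coded key and the record format are deliberately simplified assumptions, not a recommendation for key management.

```python
# Sketch of an integrity check with an HMAC: any change to the stored
# record invalidates the tag. Key handling is simplified for illustration.
import hmac
import hashlib

key = b"demo-secret-key"  # in practice: from a key store, never hard-coded

def sign(data: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the data."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes) -> bool:
    """Constant-time comparison of the recomputed tag against the stored one."""
    return hmac.compare_digest(sign(data), tag)

record = b"order=4711;qty=100"
tag = sign(record)

assert verify(record, tag)                     # unchanged data passes
assert not verify(b"order=4711;qty=999", tag)  # manipulation is detected
```

Combined with regular backups, such tamper detection covers both halves of the integrity facet: protection against manipulation and recovery after technical failure.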

3. Availability

If access rights exist for certain data, those data should be fully retrievable and usable at any time. That minimises the risk of system downtime. Synchronised storage solutions or cloud use are typical ways of achieving this.

4. Transparency

Ensuring transparency here means that the origin of the data and the way they are used can be proven. Clear assignment and legally compliant handling are guaranteed, for example, by logging access.

5. Authenticity

Proof of the data's origin ensures authenticity; establishing that origin also touches on integrity. If the origin can be verified, a recipient can rule out being deceived. In digital data traffic, identity verification is often carried out using encryption, for example via a public key infrastructure. This ensures that data arrive unchanged at the correct recipient when retrieved or sent.

Preserving data security in all its facets must be anchored throughout the entire corporate structure. To exploit all the opportunities offered by operational data collection, analysed data should be used not only in shop-floor management but also in corporate-goal planning. The cycle between data collection and calculation of target data also harbours optimisation potential for leaner processes.

Goal reached, job done?!

If only it were that simple. Digital transformation is advancing rapidly, constantly opening up new possibilities – alongside efficient innovations, new avenues for unauthorised access arise as well. The goals of data security therefore have to be pursued continuously. Identifying and assessing threats is best handled through recurring, dynamic processes viewed from an attacker-centric perspective.

Dynamic process for maintaining data security

First, define objectives that must be met for the sake of security. Next, develop attack scenarios to uncover possible weaknesses in the system. After a simulated attack, weak points can be analysed and counter-measures initiated. This process runs at regular intervals, ensuring ongoing optimisation.

Once planning and a suitable strategy are in place, only implementation remains. Outstanding examples worldwide show how crucial the right approach is during implementation, not just for cutting costs but also for generating value.

Five Fundamental Steps to Data Security in the Company

1. Communication

The most important component! Employees must be sensitised to data security. They need to understand why certain processes are necessary and be aware of the consequences of lacking data security. Only then can the measures be implemented correctly. Once understanding has been built, training sessions should be held and competences developed.

2. Device Status

Every device connected to a network should be checked for its security status and data. Regular status reviews and backups raise security with simple means.

3. Updates

The company’s security software must, of course, always be up to date. Carrying out regular updates must become a fixed process.

4. Transparency

Work processes should be clearly defined so that access and authorisation rights can be granted accordingly. Only persons who actually need access to certain data to perform a task receive it.

5. Encryption

Corporate data should be decrypted only for the duration of actual use. All devices in the network should, of course, be secured with strong passwords, and workstations must be locked whenever employees step away.

Conclusion

Combining information gathering and employee awareness forms a solid foundation for data security. Additional security standards can be built on that and continually refined.

Instead of leaning on the buzz-words mentioned at the outset, it pays to engage with the practical measures required for data security.

Andon https://selfbits.de/en/learn/lean-management/andon/ Mon, 16 Jun 2025 14:59:49 +0000 https://selfbits.de/uncategorized/andon/ Andon is a visual management method and part of the Toyota Production System. In Japanese, the term “Andon” means (paper) lamp. With classic traffic lights and monitors (“boards”) in production, the Andon method forms a visual control system for the production status.

The post Andon first appeared on Selfbits GmbH.


Andon

Andon is a visual management method and part of the Toyota Production System. In Japanese, the term “Andon” means (paper) lamp. With classic traffic lights and monitors (“boards”) in production, the Andon method forms a visual control system for the production status.

Image: Example of an Andon board

What is Andon?

The Andon method encompasses both the visualization of the status of a machine or process and the ability for employees to change this status.

The original form of Andon can now be found in many places, even outside of production: simple stacked traffic lights in red, yellow, and green that clearly signal the current status are ubiquitous not only on machines in manufacturing, but also on self-service checkouts and reverse vending machines.

Warnings, errors, and malfunctions can be seen by staff from a distance, and the location of the malfunction is immediately clear. The self-explanatory color coding also makes the problem category clear, for example whether a warning limit or an intervention limit has been reached.

In addition to the ubiquitous traffic light lamps, other forms of the Andon method have also been developed. Today, there are Andon boards, which can range from simple counters to elaborate dashboards, and various devices for workers to report status, known as Andon cords.

Andon boards

The Andon board is a visual control device used in manufacturing. In its simplest form, it can display only the current number of units or the status of the line. More advanced versions display any status information about the respective production line or manufacturing process. As part of the Selfbits Manufacturing Suite, our customers use the display for OEE, current production progress, and current production figures and quality data.
The data for the display is collected automatically via machine data collection or manually by employees via production data collection and aggregated and displayed in near real time.

Andon-Cord

The Andon cord is another tool for recording the current status. As the name suggests, there may be pull cords on a production line, for example, that enable an employee to report an error or difficulty so that they can receive assistance. Similarly, an Andon cord can also be designed as an emergency stop switch, in the form of a pull cord or an actual switch. This enables the employee to initiate a conveyor or plant shutdown in the event of problems. Here, the use of Andon cords also directly records the location of the problem through the report itself. This reduces the need for inquiries and enables a quick response.
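The interplay of board and cord can be sketched as a tiny status model: machine states map to the traffic-light colors, and a cord pull escalates the status and records the location. The class and method names are illustrative assumptions, not part of any Andon standard.

```python
# Minimal sketch of an Andon status model: states map to traffic-light
# colors, and a cord pull escalates the status and captures the location.
from enum import Enum

class Status(Enum):
    RUNNING = "green"
    WARNING = "yellow"
    FAULT = "red"

class Station:
    def __init__(self, name: str):
        self.name = name
        self.status = Status.RUNNING

    def pull_cord(self, reason: str) -> None:
        """Worker reports a problem; location and reason are captured."""
        self.status = Status.FAULT
        print(f"ANDON: {self.name} -> {self.status.value} ({reason})")

station = Station("Line 2 / Press 4")
station.pull_cord("part jammed")  # board turns red, help can be dispatched
```

Because the report itself carries the station name, no inquiry is needed to locate the problem, which is precisely the fast-response benefit described above.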

Summary

Features

  • Visualizing the operating status of a machine or production line
  • Reporting a deviation so that it can be remedied
  • Stopping the production plant in case of danger and halting the further flow of substandard products

Advantages

  • Improved transparency – Immediate availability of information increases transparency
  • Improved productivity – With immediate visibility of problems, they can be resolved more quickly.
  • Increased flexibility – With Andon, workers can take on more responsibility because the current status is transparent.
  • Increased plant availability – With Andon, faults can be identified and rectified more quickly.
  • Save time and money – Andon is a simple method for exchanging information on the shop floor.

Availability in OEE calculation https://selfbits.de/en/learn/digital-production-i40/availability-in-oee-calculation/ Mon, 16 Jun 2025 14:55:25 +0000 https://selfbits.de/uncategorized/availability-in-oee-calculation/ Availability is one of the three factors used to calculate OEE. Below, we show how it is defined and calculated.

The post Availability in OEE calculation first appeared on Selfbits GmbH.


Availability in OEE calculation

Availability is one of the three factors used to calculate OEE. Below, we show how it is defined and calculated.

Image: Sample illustration

Introduction

Availability is a percentage indicator. It describes the extent to which a machine's capacity is used for value-adding work relative to its planned availability. It is calculated as the ratio of the main operating time to the planned operating time.

The planned operating time results from the total reference period (e.g., one calendar week) minus non-production periods (e.g., weekends) and scheduled downtimes (e.g., routine cleaning and machine maintenance).
To determine the main operating time, actual stoppages, breakdown-related interruptions, and the actual setup time are deducted from the planned operating time. The main operating time is therefore an actual time, i.e., one measured in reality. For an accurate OEE calculation, it is crucial to record this time as precisely and as promptly as possible. As mentioned above, availability is obtained as a percentage by dividing the main operating time by the planned operating time.

Availability = Main Operating Time / Planned Operating Time

From this formula it follows that availability must always refer to a specific time interval. It is calculated periodically or on demand. Values range from 0 % up to a maximum of 100 %; the larger the number, meaning the higher the utilisation of the machine relative to the planned time, the better. In the context of OEE, availability is also called the utilisation rate and is one of the three factors in the equation, alongside the performance rate and the quality rate. Availability, like the OEE derived from it, is relevant for shop-floor operators, plant management, and executive leadership, because OEE is widely accepted as the benchmark indicator of an industrial facility's productivity.

Source: VDMA 66412-1 : 2009-10

Poka Yoke https://selfbits.de/en/learn/poka-yoke/ Mon, 16 Jun 2025 14:40:48 +0000 https://selfbits.de/uncategorized/poka-yoke/ Poka Yoke, which means “avoid unfortunate mistakes” in Japanese, is a process design principle aimed at avoiding errors.

The post Poka Yoke first appeared on Selfbits GmbH.


Poka Yoke

Avoid “unfortunate mistakes” by using Poka Yoke to uncover errors and reduce opportunities for error.

Image: Sample illustration

Introduction

Description:

The Japanese term Poka Yoke (meaning “avoiding unfortunate mistakes”) can actually be defined simply as “avoiding mistakes”. The fundamental goal is a zero-error state. At the same time, however, it is recognized that no system and no human being is able to completely avoid unintentional errors. They are part of every production and process. Poka Yoke therefore attempts to reduce or completely avoid errors with mostly simple but effective systems and to reduce the impact of errors in the production process on the end product.

Shigeo Shingo is considered the inventor of the principle and the linguistic origin is derived from Go and Shogi (a Japanese variant of chess): In these, poka denotes a wrong move. In a broader sense, it means “stupid mistake, blunder”. Yoke comes from the verb yokeru, which means “to avoid”.

As Poka Yoke was developed as part of the Toyota Production System, it has its origins on the shop floor. However, the principles also apply in the office, warehouse, and service sector, as the examples below clearly show.

Advantages

Processes in which fewer errors occur increase quality, but also have a significant impact on productivity. As the quality factor increases, the OEE naturally also increases. Throughput times can often be reduced with less rework and simpler processes. Of course, there are also cost factors.

Application and basic rules

The application of Poka Yoke is very simple: trace errors back to their sources and check how you can prevent the error from ever occurring again.

These basic rules help with the application and can serve as guidelines for a suitable solution:

  • Try not to spend money
  • Simpler is better
  • Do not make the use of Poka Yoke optional
  • Do not confuse measuring instruments with errors
  • Reduce decisions

Procedure

  1. First identify the errors that occur in your production
  2. Use the Pareto principle to decide which you should work on first
  3. Find out the process step in which the observed error occurs
  4. If your processes are not standardized, standardize them first, because designing a Poka Yoke for a non-repetitive process is a waste of time.
  5. Think about what needs to be done to ensure that this error never occurs. This could be a change to the component, a change to the device or machine or other simple support for the workers at the workplace.
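Transferred to software, the same idea becomes an interlock: the process step refuses to start unless the error source is ruled out, instead of relying on inspection afterwards. The press scenario and the sensor names are illustrative assumptions in the spirit of the position-sensor example from production.

```python
# Sketch of a digital poka yoke: an interlock that makes the wrong action
# impossible. Sensor names and the press scenario are illustrative.

def start_pressing(part_seated: bool, guard_closed: bool) -> str:
    """Start the press only if the error conditions are ruled out."""
    if not part_seated:
        return "blocked: part not seated correctly"
    if not guard_closed:
        return "blocked: safety guard open"
    return "pressing started"

print(start_pressing(part_seated=False, guard_closed=True))
print(start_pressing(part_seated=True, guard_closed=True))
```

Note how the rule "reduce decisions" is applied: the worker never has to judge whether it is safe to press; the interlock decides.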

Examples from production

Poka Yoke in Part Design

A chamfered corner on a component so that it can be placed or connected in only one correct way (as on SIM cards, for example)

Staggered, asymmetrical placement of drill holes to create only one mounting option

Poka Yoke in the design of devices and machines

Position sensors only allow a pressing process to start when the component is inserted correctly

Poka Yoke in process design

Asking customer service employees to record the customer's eye color on a form, which ensures eye contact with the customer

Examples from Everyday Life

  • Car keys, USB Type-C, and Lightning connectors: they are reversible and therefore cannot be plugged in incorrectly.
  • USB Type-A plugs: they fit in only one orientation, often to the frustration of users
  • Overflow drains in washbasins and bathtubs: They prevent the bathroom from flooding
  • Cash machines: They first return the card and then dispense the cash withdrawn. This means that the card is less likely to be forgotten.
  • Fuel pump nozzles: they shut off automatically as soon as the tank is full, preventing overfilling and fuel spills
