Basic Assignments
 
Options & Settings
 
Main Time Information
Color Code: Yellow
Assigned To: Brandon Moore
Created By: Brandon Moore
Created Date/Time: 2/23/2022 2:50 pm
 
Action Status: Blank (new)
Show On The Web: Yes - (public)
Priority: 0
 
Time Id: 8782
Template/Type: Brandon Time
Title/Caption: Adilas Time
Start Date/Time: 3/24/2022 9:00 am
End Date/Time: 3/24/2022 12:15 pm
Main Status: Active

Notes:

The morning meeting started out normally and then sort of morphed into a multi-hour meeting with no real stop or start between the different topics and sections. Sean checked in first and lightly went over some of the BioTrack API socket questions. We got into a discussion on promises made, client and user expectations, and how deep we want to go into those areas. Some of our clients expect an easy button for every possible wish or desire. It just doesn't work like that. Yet those expectations are real and valid based on user requirements and user demands. That puts tons of pressure on us as a software system. There is just no physical way to make it do everything for every person.

Wayne joined the meeting and the conversation moved over to slow queries, database indexes, record counts, volume of data, and all kinds of other topics. We spent some major time going over potential problems with our flexible "LIKE" searches on the parts homepage (wildcard searches across multiple database fields and columns). We need the flexibility, and we have trained our people to rely on it, but because it is so open, it causes issues under a huge data load (not thousands but millions or multiple millions of records). We can handle quite a bit with no problem. When you really overload it, it starts groaning and squeaking under intense pressure. Technically, that is called scale.
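To make that concrete, here is a rough sketch of the shape of that kind of open-ended search (hypothetical table and column names, not the actual parts homepage query):

-- A flexible wildcard search across multiple fields and columns.
-- The leading % on each pattern means a normal B-tree index on these
-- text columns can't be used, so the database scans every row.
SELECT part_id, part_number, description
FROM parts
WHERE corp_id = 123
  AND ( part_number LIKE '%widget%'
     OR description LIKE '%widget%'
     OR vendor_sku  LIKE '%widget%' );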

The amount of data that some of our clients are generating and recording is causing a volume or scale issue. We got deep into the logic of how we could speed things up if we were able to use indexes and exact searches, and get rid of huge list look-ups (the SQL IN clause) and super flexible text searches (SQL LIKE clauses with wildcards). Basically, when doing some of these operations, the database skips the indexes and ends up looping over millions and millions of records. We spent some time talking about a number of possible solutions.
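For contrast, a hedged sketch of the index-friendly shapes we would like to steer toward where possible (same hypothetical names as above):

-- An exact match can use an index on (corp_id, part_number).
SELECT part_id, part_number
FROM parts
WHERE corp_id = 123
  AND part_number = 'WIDGET-100';

-- A prefix match (no leading %) can still use that same index.
SELECT part_id, part_number
FROM parts
WHERE corp_id = 123
  AND part_number LIKE 'WIDGET%';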

We had a break where Wayne had to go pick up his kid from school. In the meantime, John and I were talking about some of his projects. He was showing me a bunch of stuff that he has on a Jira board for the discount engine. I was kinda getting overwhelmed and depressed. So much stuff to do and manage. Sometimes it feels like it is all over the place and it never ends.

Wayne got back and we flipped back over to database queries and possible options to help speed things up and handle a huge scale or load of data. We talked about a combo type approach that includes tweaks to the database, code changes, UI and UX (user interface and user experience) changes, and other backend management changes. It may end up taking a combo package or approach like that to fix some of these problems.

That topic and discussion led us to talk about prior or earlier decisions that were made years ago. We talked about why certain things were decided upon and implemented. It is very interesting, and the story keeps rolling out in front of us. What we have now is a combination of history, situations, decisions, and even future wants and needs. It all mixes together and makes a complex solution. Some of the why and what we did is super important.

Having said that, things keep changing and morphing. We talked about building things for a non-static environment and dealing with scale (up or down). That led us to possible daily or real-time mini aggregates on quantities and other key points and factors. Basically, ways of summarizing data and getting to more of a business intelligence type level - quick counts, sums, totals, averages, mins, maxes, etc.
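As a rough illustration (hypothetical table and column names), that business intelligence level is basically this kind of query - the idea being to pre-compute it into a mini aggregate table instead of re-running it over millions of transactional rows on demand:

-- Quick counts, sums, totals, averages, mins, and maxes per item per location.
SELECT corp_id,
       store_id,
       part_id,
       COUNT(*)      AS line_count,
       SUM(quantity) AS total_quantity,
       AVG(quantity) AS avg_quantity,
       MIN(quantity) AS min_quantity,
       MAX(quantity) AS max_quantity
FROM invoice_line_items
GROUP BY corp_id, store_id, part_id;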

Talking about the aggregation processes led to talks about database triggers, update routines, scheduling, clean-ups, automation, manual checks and overrides, and the list goes on. Along the way, we kept coming back to how critical the inventory pipeline and tracking the ins and outs are to these values, stats, and numbers. Steve joined us and we got into update functions, methods, table row locking, more manual updates, and reconciliation options to make sure all is well. You basically need the transactions (what happens and when - the details or the historical record). You also need the sums or totals (the running or current aggregates). These two pieces, transactions and aggregates, play different roles.
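A minimal sketch of that two-piece idea, assuming MySQL-style syntax and hypothetical table names - write the detail record and bump the running aggregate inside one transaction so the two never drift apart:

START TRANSACTION;

-- 1) The transaction piece: the detail/historical record of what happened and when.
--    Here, a sale of 5 units of a part.
INSERT INTO invoice_line_items (corp_id, store_id, part_id, quantity, created_at)
VALUES (123, 4, 987, 5, NOW());

-- 2) The aggregate piece: lock the summary row (table row locking)
--    so concurrent carts line up instead of clashing, then bump it.
SELECT part_count
FROM part_aggregate_current
WHERE corp_id = 123 AND store_id = 4 AND part_id = 987
FOR UPDATE;

UPDATE part_aggregate_current
SET part_count = part_count - 5
WHERE corp_id = 123 AND store_id = 4 AND part_id = 987;

COMMIT;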

As part of our discussion, we were looking at one-to-many database table and column relationships and how things are handled currently. It gets deep quickly. We then started talking about breaking shared tables into corp-specific tables and building smaller corp-specific aggregation type tables. That led us to a small discussion on table sizes, which ones are already broken down into corp-specific tables, and which ones may be up for review.
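As a loose sketch of that split (hypothetical names): in a shared table, every corporation's rows live together and every query filters on the corp id; in a corp-specific table, the corp is baked into the table itself, so the table is smaller and the filter disappears.

-- Shared table: all corps together, corp_id filter on every query.
SELECT part_id, quantity
FROM invoice_line_items
WHERE corp_id = 123 AND part_id = 987;

-- Corp-specific table: one table per corporation (world).
SELECT part_id, quantity
FROM invoice_line_items_123
WHERE part_id = 987;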

On the mini aggregate tables (quick sums, counts, and totals), we could go different ways. Do we want a historical record of the aggregates, or do we just want to keep current (now) sums, counts, and totals? If you add history, you start adding dates and get new records every day (assuming things are changing). If you skip the dates, there are fewer records and the current view may be quicker, but going back in time could take longer. Maybe both... one table to hold the historical aggregates over time and one to hold the quick and dirty (real-time current look or roll call report).
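One possible shape for the "maybe both" idea, assuming MySQL and hypothetical table names - a dated history table that gets a new row per day, plus a no-date current table maintained with an upsert:

-- Historical aggregates: one dated row per corp/store/part per day.
INSERT INTO part_aggregate_history (corp_id, store_id, part_id, agg_date, part_count)
SELECT corp_id, store_id, part_id, CURDATE(), SUM(quantity)
FROM invoice_line_items
GROUP BY corp_id, store_id, part_id;

-- Current (no dates): one row per corp/store/part, kept up to the moment.
INSERT INTO part_aggregate_current (corp_id, store_id, part_id, part_count)
VALUES (123, 4, 987, 5)
ON DUPLICATE KEY UPDATE part_count = part_count + VALUES(part_count);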

The deeper we got, the more that settings and database options came into play. We have a future project called fracture in the planning stages. We need to be able to have and use settings for what our users and corporations (worlds) want to see and use. We have tons of data and tons of records. Okay, great - what do you, as a user or end user, want to see, show, display, sort, etc.? How does that need to be organized? It goes deep and gets into advanced settings, display options, and being able to save layout and configuration options per person, per page, per corporation. That all needs to be included in the fracture stuff, along with the other transactional and mini aggregate database options listed on this page.
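Purely as a sketch (hypothetical names, nothing decided yet), the per person, per page, per corporation settings could be as simple as one scoped row holding the saved layout:

-- Saved display/layout options, scoped per corporation, per user, per page.
CREATE TABLE saved_page_settings (
  setting_id    INT AUTO_INCREMENT PRIMARY KEY,
  corp_id       INT NOT NULL,
  user_id       INT NOT NULL,
  page_name     VARCHAR(100) NOT NULL,  -- e.g. 'parts_homepage'
  settings_json TEXT NOT NULL,          -- columns shown, sort order, filters, etc.
  UNIQUE KEY one_per_scope (corp_id, user_id, page_name)
);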

Once you know where you want to go... You can get there (hopefully). If you plan on being the only one to get there, the trail doesn't have to be very good. If you plan on repeating the journey, the trail needs to be even better. If you want a bunch of people and/or companies to complete the journey, you will need a road, not a trail. All part of the process.

Going back to tables, we talked about two big corp-specific tables in the system that will need a buddy mini aggregate table (or more). The big transactional tables are the po/invoice line items and the time sub inventory tables. Those two tables (already corp-specific) will need helper tables to keep track of both transactional aggregates per date (semi-historical summaries) and current non-historical mini aggregates for the quick view (no dates). Another way of putting this is doing location-specific pre-math calculations vs. having to go re-sum or re-count things based on complex look-ups. You virtually do the known math per item, per package, and per location before it is needed. Then those numbers and values are quickly ready and available as mini aggregates. If you need more details or the blow-by-blow, you just go back to the transactional tables or records (different tables).
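In practice that looks something like this (same hypothetical names as the earlier sketches) - the quick view reads one pre-computed row, and the blow-by-blow goes back to the transactional records:

-- Quick, current view: one indexed mini aggregate row per item per location.
SELECT part_count
FROM part_aggregate_current
WHERE corp_id = 123 AND store_id = 4 AND part_id = 987;

-- Blow-by-blow drill-down: back to the transactional line items.
SELECT created_at, quantity, po_invoice_id
FROM invoice_line_items
WHERE corp_id = 123 AND store_id = 4 AND part_id = 987
ORDER BY created_at;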

Some of our current pain is in the sales and POS (point of sale) systems, such as carts and inventory tracking. We can do it, but once it gets into hundreds of thousands or millions upon millions of records, you run into the scale factor and all of its issues.

Just for our notes, a couple of key pieces are corporations and locations. If everybody just had a single location, some of this would be really easy. If you have more than one location, the solution needs to scale and keep track of mini aggregates based on corporation and location. That is a huge key that often gets overlooked. Plan for multiple (unlimited) locations. Here are some possible columns for these mini aggregate quantity tables - auto id, corp id, store id, part id, sub reference id (sub id), part count, and maybe a date if doing historical.
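Turning that column list into a rough sketch (assuming MySQL, with the actual names and types still to be decided):

-- Possible mini aggregate quantity table based on the columns listed above.
CREATE TABLE mini_aggregate_quantities (
  auto_id    INT AUTO_INCREMENT PRIMARY KEY,  -- auto id
  corp_id    INT NOT NULL,                    -- corporation (world)
  store_id   INT NOT NULL,                    -- location
  part_id    INT NOT NULL,                    -- item
  sub_id     INT NULL,                        -- sub reference id (package)
  part_count DECIMAL(12,4) NOT NULL,          -- pre-calculated quantity
  agg_date   DATE NULL,                       -- only if doing historical
  KEY by_location (corp_id, store_id, part_id, sub_id)
);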

Switching subjects again, we need to charge clients more for harder tasks and functions. I had a friend who was talking about software. He said you tend to get two different types: cheap software or custom software. Hardly ever do you get cheap custom software. Custom costs time and money. That is the way it is.

After Wayne left, Steve and I stayed on for a bit to go over some other things. We talked about gift cards, coupons, and other upcoming projects. I showed him a PO rounding demo and what I have done so far. The next question was... how deep are we going to dive per subject? Lots of options.