
Load Combination Schemas

In the most recent update to the load combination generator, we have simplified the inner workings of the module and added the following functionality:

  • Load combination generation is defined by a single .json file (schema) for each standard.
  • Load groups and combinations can be created without having to assign loads first in the S3D modelling space.
  • Patterns, which are based on the old "Expand Wind Loads" checkbox, now work with any load cases.

Nomenclature

In the previous version of the load combination generator, the load case name was used for many things:

  • Act as a unique ID.
  • Differentiate between the load super case and the load case, and always require one value in each (leading to bizarre ones like Dead: dead).
  • Remain human readable, all the while remaining small enough for dropdown menus by contracting words (Live: Q-dist-roof-floor).

In the most recent update, we have divided load case names into symbols, which are kept short for use in code and where space is limited (when naming load combinations, for example: 1.25D1 + 1.5La + 1.5Ld + 0.5Sl + 0.5T), and labels, which aim to be very descriptive. Both symbols and labels have to be unique within a given standard. We also allow load cases to be named the same as the load super case, usually for the default (most commonly used) load case. The two examples used above would be split like this:

{"D": "Dead"}
{"Ldr": "Live - Concentrated, Roofs, Floor"}

A symbol must be composed of at least one uppercase letter, which defines the super case. A super case is a new concept, used to group similar load cases that usually act together. The super case's name is given at the start of the load case label (before the dash). In the example above (Eurocode), the load case Ldr would be part of the L super case (named Live in the label), alongside other load cases like Ldd and Ldo. Behind the scenes, a super case is mainly used to enforce the filtering rules, that is, to determine which schema rows need to be kept and which need to be removed.

The first part of the label (before the dash) is the name of the super case. The second part is a description, using commas to separate categories from sub-categories. The second part is optional, but only one load case can take the default super case spot.
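
As an illustration, these naming rules can be expressed in a few lines of code. The following is a minimal sketch with hypothetical helper names, not SkyCiv's actual implementation:

def super_case_of(symbol):
    # The leading uppercase letter of a symbol defines its super case,
    # e.g. "Ldr" -> "L", "Wt" -> "W", "D" -> "D".
    return symbol[0]

def split_label(label):
    # Split a label into the super case name and its optional description,
    # e.g. "Live - Concentrated, Roofs, Floor" -> ("Live", ["Concentrated", "Roofs", "Floor"]).
    if " - " in label:
        name, description = label.split(" - ", 1)
        return name.strip(), [part.strip() for part in description.split(",")]
    return label.strip(), []

print(super_case_of("Ldr"))                               # "L"
print(split_label("Live - Concentrated, Roofs, Floor"))   # ("Live", ["Concentrated", "Roofs", "Floor"])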

Standard schema rows

Each standard has a schema which completely defines all of the possible load combinations for that standard. A schema .json file is quite straightforward, but it can become quite long, especially for standards (like Eurocode) that require a multitude of permutations. As a simple example, take the following requirement.

1.2*D + 1.5*L + (0.5*S or 0.5*W or 0.5*T)

To convert this into our schema, we need to break it down into each possible permutation:

1.2*D + 1.5*L
1.2*D + 1.5*L + 0.5*S
1.2*D + 1.5*L + 0.5*W
1.2*D + 1.5*L + 0.5*T
1.2*D + 1.5*L + 0.5*S + 0.5*W
1.2*D + 1.5*L + 0.5*W + 0.5*T
1.2*D + 1.5*L + 0.5*T + 0.5*S
1.2*D + 1.5*L + 0.5*S + 0.5*W + 0.5*T
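
Since the "or" terms can be combined freely, enumerating these permutations amounts to taking every subset of the optional load cases. The short sketch below is purely illustrative; it simply reproduces the list above:

from itertools import combinations

base = {"D": 1.2, "L": 1.5}                  # terms that are always present
optional = {"S": 0.5, "W": 0.5, "T": 0.5}    # "or" terms that may appear in any combination

rows = []
for size in range(len(optional) + 1):
    for subset in combinations(optional, size):
        row = dict(base)
        row.update({case: optional[case] for case in subset})
        rows.append(row)

for row in rows:
    print(" + ".join(f"{coefficient}*{case}" for case, coefficient in row.items()))
# prints the 8 permutations above, from 1.2*D + 1.5*L
# up to 1.2*D + 1.5*L + 0.5*S + 0.5*W + 0.5*T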

Once every load combination is listed out in this manner, you can build the schema by following these steps:

  1. Use each load case key and coefficient to create a schema row object.
  2. Name each row with a unique identifier (since this is going to be an object). The convention is to use dashes to separate different elements of the name.
  3. One level up, group the rows into criteria (strength, serviceability, accidental, etc.)

The end result should look something like this:

"rows": {
  "strength":{
    "A-1-u": {"D": 1.40},
    "A-2a-u":{"D": 1.25, "L": 1.50, "Ls": 1.50},
    "A-2b-u":{"D": 1.25, "L": 1.50, "Ls": 1.50, "S": 1.00},
    "A-2c-u":{"D": 1.25, "S": 1.50, "W": 0.40},
    "A-3a-u":{"D": 1.25, "S": 1.50}
  }
}

Load Combination generation algorithm

The algorithm goes through several steps to generate the final load combination object:

  • A schema as defined above is needed. It will be passed to the main load combination generation function.
  • An object is created to group the number of load cases per pattern. For example, let's look at a request for the load cases below:
2 Dead load cases, with a merge pattern
4 Wind load cases, with an individual pattern
1 Snow load case, with an individual pattern
2 Dead load cases, with a merge pattern

Grouping the load case patterns will give the following object, which will be passed to the main load combination generation function.

input_by_case =
  {
    "D": {"merge": [2, 2], "individual": []},
    "W": {"merge": [], "individual": [4]},
    "S": {"merge": [], "individual": [1]}
  }
  • The last two arguments are filtering objects, which allow for filtering by criteria or by schema key.
  • Once it has all of the required arguments, the main load combination generation function is called. This function goes through multiple nested loops to generate every required combination, which are explained in the following bullet points, and illustrated in the subsequent figure.
    • At the highest level, it loops through the schema rows. Each row is checked to see if it should be kept or skipped at this step, using the filtering objects and specific logic that is described in the section below.
    • Nested into the first loop is a second one, which loops through each requested load case. If the requested load case also exists in the schema row (requests are summarized in the input_by_case object), then we proceed to the next level.
    • Nested into the second loop is a third one, which loops through each possible pattern to see if there are load groups to generate within them, and runs the function to name and generate them when they do.
    • Once all load cases in the schema row have been generated and named, they are recombined (alongside their coefficients) into one or multiple load combinations.

  • This process is repeated for each row of the schema, pushing all of the generated load combinations into the final load combination object (a simplified sketch of these loops is shown below).
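
The sketch below walks through those loops. It is not the actual S3D code: the function names, the plain-string output, and the omission of the filtering step are all assumptions made for illustration.

from itertools import product

def expand_case(symbol, patterns):
    # Build the alternative load groups for one requested load case.
    # Each alternative becomes one branch of the final combinations.
    counter = 0
    alternatives = []
    # Merge pattern: the load cases from one input row act together in a single branch.
    for count in patterns.get("merge", []):
        alternatives.append([f"{symbol}{counter + i + 1}" for i in range(count)])
        counter += count
    # Individual pattern: every load case gets its own branch.
    for count in patterns.get("individual", []):
        for i in range(count):
            alternatives.append([f"{symbol}{counter + i + 1}"])
        counter += count
    return alternatives

def generate_combinations(schema_rows, input_by_case):
    results = []
    for row_key, row in schema_rows.items():                     # loop 1: schema rows (filtering omitted)
        branches = []
        for symbol, coefficient in row.items():                  # loop 2: load cases in the row
            if symbol not in input_by_case:                      # only requested cases are expanded
                continue
            groups = expand_case(symbol, input_by_case[symbol])  # loop 3: patterns -> load groups
            branches.append([(coefficient, group) for group in groups])
        if not branches:
            continue
        # Recombine the generated load groups into one or more load combinations.
        for choice in product(*branches):
            results.append(" + ".join(f"{coefficient}*{name}"
                                      for coefficient, group in choice for name in group))
    return results

schema_rows = {"A-2a": {"D": 1.2, "L": 1.5}, "A-2b": {"D": 1.2, "L": 1.5, "S": 0.5}}
input_by_case = {"D": {"merge": [2, 2], "individual": []},
                 "L": {"merge": [], "individual": [1]},
                 "S": {"merge": [], "individual": [1]}}
for combination in generate_combinations(schema_rows, input_by_case):
    print(combination)
# 1.2*D1 + 1.2*D2 + 1.5*L1
# 1.2*D3 + 1.2*D4 + 1.5*L1
# 1.2*D1 + 1.2*D2 + 1.5*L1 + 0.5*S1
# 1.2*D3 + 1.2*D4 + 1.5*L1 + 0.5*S1

Note how the two merged dead load entries ([2, 2]) each produce their own branch, which is exactly the merge behavior described in the next paragraphs.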

It is worth noting that all of the logic related to patterns happens inside a single schema row. Knowing this is important to understanding the behavior of patterns. The merge pattern, for example, does not allow merging anything other than the load case it is assigned to. This means that you cannot:

  • Merge different load cases together, like trying to merge D1 and L1 load groups.
  • Merge identical load cases on different rows of the input table. For example, in the example given in point #2 above, we are asked to generate 2 dead loads using the merge pattern on two separate rows. The resulting combinations would then look something like this:
1.2*D1 + 1.2*D2 + 1.5*L
1.2*D3 + 1.2*D4 + 1.5*L
1.2*D1 + 1.2*D2 + 1.5*L + 0.5*S
1.2*D3 + 1.2*D4 + 1.5*L + 0.5*S
1.2*D1 + 1.2*D2 + 1.5*L + 0.5*W
1.2*D3 + 1.2*D4 + 1.5*L + 0.5*T

Auto filtering unnecessary load combinations

While the above algorithm is functional without any filtering, it can lead to redundant load combinations, which lead to extra computing time and redundant results. Take the following load combinations:

1.2*D + 1.5*L
1.2*D + 1.5*L + 0.5*S
1.2*D + 1.5*L + 0.5*W
1.2*D + 1.5*L + 0.5*T

If the only load case we have is a single dead load case, these four schema rows will all result in identical load combinations:

1.2*D
1.2*D
1.2*D
1.2*D

To avoid this situation, four rules are used, each of which contains some slight exceptions. First, let's have a look at the rules. The default state is for the combination to be kept, and the rules are used to determine which ones to exclude.

Filtering by criteria

This case is pretty self-explanatory. If a criterion is not requested, all schema rows associated with that criterion are discarded.
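
In code terms, this amounts to a single dictionary filter, roughly like the sketch below (variable names and the serviceability row are illustrative, not the actual implementation):

requested_criteria = {"strength"}   # criteria the user asked for
schema_rows = {
    "strength": {"A-1-u": {"D": 1.40}},
    "serviceability": {"S-1": {"D": 1.00}},
}
# Discard every group of rows whose criterion was not requested.
kept = {criterion: rows for criterion, rows in schema_rows.items()
        if criterion in requested_criteria}
print(kept)   # only the "strength" rows remain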

Filtering by characters in the schema key

Schema keys are usually dash-separated pointers to the original reference. For example, in the NBCC example below, the key has three components:

  • A: The first term is usually the main reference, referencing the table from which this part of the loads is taken.
  • 2b: The second term is usually a unique identifier for the load combination inside the table.
  • u: The third term is usually reserved to indicate when a large number of load combinations are permuted with a slight modification. For example, it can indicate whether the dead loads in the load combination are favorable (f) or unfavorable (u).
{
  "strength":{
    "A-2b-u":{"D": 1.25, "L": 1.50, "Ls": 1.50, "S": 1.00}
  }
}

Filtering in the schema key can be done for any of these terms. For example, if we want to filter by the third term, we can add the following filter, which will create a filtering dropdown for this term:

"name_filters": {
  "Strength": {
    "Dead Loads": {
      "position": 2,
      "tooltip": "",
      "items": {
        "Favorable": "f",
        "Unfavorable": "u"
      },
      "defaults": ["Favorable", "Unfavorable"]
    }
  }
}

All of the possible dropdown names and associated terms must be listed under “items”. Only the schema rows with matching symbols will be kept. If it is required to keep a schema row independently of what is entered in the filter, the term can be left blank. Any schema key that does not contain all of the matching dropdown terms will be discarded.
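
To illustrate the matching, the check could be sketched as follows (the keep_row helper and the hard-coded selection are assumptions for the example; the production logic may differ):

# The filter definition mirrors the "name_filters" example above.
name_filter = {"position": 2, "items": {"Favorable": "f", "Unfavorable": "u"}}
selected = ["Unfavorable"]                                # what the user picked in the dropdown
allowed_terms = {name_filter["items"][name] for name in selected}

def keep_row(key):
    terms = key.split("-")
    position = name_filter["position"]
    if position >= len(terms) or terms[position] == "":
        return True                                       # blank term: keep regardless of the filter
    return terms[position] in allowed_terms

rows = {"A-2b-u": {"D": 1.25, "L": 1.50, "Ls": 1.50, "S": 1.00},
        "A-2b-f": {"D": 0.90, "L": 1.50, "Ls": 1.50, "S": 1.00}}
print({key: row for key, row in rows.items() if keep_row(key)})   # only "A-2b-u" remains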

Redundant combinations

If a schema row is not filtered out by the first two steps, it moves on to step number three. In this step, the redundancy issue from the example above is addressed. To do this, we need to look at two objects simultaneously: the schema row and the sorted input_by_case object (see the description above), which describes which load cases have been requested. If the schema row contains any super case which the input_by_case object does not, the load combination is removed. Take, for example, the following schema row:

"A-2a-u":{"D": 1.25, "L": 1.50, "S": 1.50}

and the following input_by_case object:

input_by_case =
{
  "D": {"merge": [], "individual": [1]},
  "L": {"merge": [], "individual": [4]}
}

In this example, the schema row contains the super case S, which has not been requested. Keeping this row would lead to a load combination identical to the one associated with the schema row below, so it is removed.

"A-1a-u":{"D": 1.25, "L": 1.50}

Exceptions

While this behavior is usually desirable, there are cases where NOT deleting a row when the load case is absent leads to a much simpler schema. For example, if we have horizontal earth loads that should be added to every combination in the schema, but are not always present, we could copy and paste all of the load combinations and modify the schema key of the new rows with a suffix like "h" for horizontal earth loads. Alternatively, we can simply add the horizontal earth load to all of the cases and add a keep exception to the load case in the metadata. That way, if the load case is not requested, it will not show up, but the row will still be kept. The result looks something like this in the schema's meta property:

"H": {
  "label": "Lateral Earth - Unfavorable",
  "rank": 1,
  "exceptions": ["keep"],
  "old_labels": []
},
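
To illustrate both the rule and the keep exception, here is a minimal sketch (hypothetical helper names; as a simplification, the leading letter of a symbol is taken as its super case):

meta = {"H": {"exceptions": ["keep"]}}

def is_redundant(row, input_by_case, meta):
    # True if the row references a super case that was never requested,
    # unless that load case carries the "keep" exception.
    requested = {symbol[0] for symbol in input_by_case}   # leading letter = super case (simplified)
    for symbol in row:
        if "keep" in meta.get(symbol, {}).get("exceptions", []):
            continue                                      # a "keep" case never causes the row to be dropped
        if symbol[0] not in requested:
            return True
    return False

input_by_case = {"D": {"merge": [], "individual": [1]},
                 "L": {"merge": [], "individual": [4]}}
print(is_redundant({"D": 1.25, "L": 1.50, "S": 1.50}, input_by_case, meta))  # True  -> row removed
print(is_redundant({"D": 1.25, "L": 1.50, "H": 1.50}, input_by_case, meta))  # False -> row kept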

Superfluous combinations

If a schema row is not filtered out by the first three steps, it moves on to step number four. In this step, the issue of matching specific load cases between the schema and the request is addressed. If a schema row and a request have matching super cases, but the specific load case requested is not in the schema, the row will not be kept. Take, for example, the following schema row:

"A-2a-u":{"D": 1.25, "Sl": 1.50}

and the following input_by_case:

input_by_case =
{
  "D": {"merge": [], "individual": [1]},
  "Sh": {"merge": [], "individual": [1]}
}

In this example, both the schema row and the request have matching super cases. However, the request requires a combination with Sh, which the schema row does not provide. For this reason, the schema row is not kept.
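
A sketch of this fourth rule, under the same simplification that the leading letter of a symbol is its super case (the helper name is hypothetical):

def is_superfluous(row, input_by_case):
    # True if a requested load case shares a super case with the row,
    # but the specific load case itself is not provided by the row.
    for requested in input_by_case:
        same_super = [symbol for symbol in row if symbol[0] == requested[0]]
        if same_super and requested not in same_super:
            return True
    return False

input_by_case = {"D": {"merge": [], "individual": [1]},
                 "Sh": {"merge": [], "individual": [1]}}
print(is_superfluous({"D": 1.25, "Sl": 1.50}, input_by_case))  # True  -> row not kept
print(is_superfluous({"D": 1.25, "Sh": 1.50}, input_by_case))  # False -> row kept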

Exceptions

Again, this behavior is usually desirable, but it can lead to problems. One such problem is when standards have load cases that share a super case but do not act simultaneously. For example, in ASCE, the wind loads W and tornado loads Wt do not act simultaneously, though they share the same super case W. When we run into this problem, we can add an exception in the metadata to switch to another super case before the code runs. Behind the scenes, the symbol following the "->" characters will be attributed to the load case, which will simulate the load case acting in that super case. The result looks something like this in the schema's meta property:

"Wt": {
  "label": "Wind - Tornado",
  "rank": 8,
  "exceptions": ["supercase->X"],
  "old_labels": []
},

In the case above, the super case "W" will be swapped to "X" before the code runs. This feature can also be used to group together load cases that have unique super case symbols.
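
Behind the scenes, the exception could be resolved along the lines of this hypothetical sketch (the real parsing may differ):

meta = {"Wt": {"exceptions": ["supercase->X"]}}

def effective_super_case(symbol, meta):
    # Return the super case used by the filtering rules for this load case.
    for exception in meta.get(symbol, {}).get("exceptions", []):
        if exception.startswith("supercase->"):
            return exception.split("->", 1)[1]   # the symbol after "->" is used instead
    return symbol[0]                             # default: the leading uppercase letter

print(effective_super_case("Wt", meta))   # "X" instead of "W"
print(effective_super_case("W", meta))    # "W"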
