This project is mirrored from https://github.com/cockroachdb/cockroach.
  1. 30 Nov, 2020 1 commit
    • Merge #56974 · 76935fd6 (authored by craig[bot])
      56974: roachtest: introduce new tpce/c=5000/nodes=3 roachtest r=nvanbenschoten a=nvanbenschoten
      
      This commit introduces a new roachtest called `tpce/c=5000/nodes=3`, which
      uses a Dockerized version of https://github.com/cockroachlabs/tpc-e to run
      an instance of the TPC-E benchmark. The setup is not spec-compliant because
      it uses a merged Driver and Tier A process, allowing for easier deployment.
      
      The benchmark first IMPORTs a 5000 customer dataset from Google Storage and
      then runs the workload for an hour. The workload spits out a report at the
      end of the run that looks like:
      ```
      _Transaction_______|__pass_mix___pass_lat__|______txns______txns/s_______mix__________avg__________p50__________p90__________p99_________pMax
       BrokerVolume      |     false       true  |      2909        4.85    4.956%     18.345ms      8.689ms     52.051ms     63.575ms     77.651ms
       CustomerPosition  |     false       true  |      7726       12.88   13.163%     13.311ms      7.941ms     35.595ms     55.825ms     74.606ms
       DataMaintenance   |       n/a        n/a  |        10        0.02    0.017%      17.94ms     19.147ms     25.083ms     27.172ms     27.172ms
       MarketFeed        |     false       true  |       534        0.89    0.910%    145.943ms    140.083ms     179.87ms     245.24ms    265.666ms
       MarketWatch       |     false       true  |     10679       17.80   18.194%     12.509ms      7.404ms     32.858ms     45.705ms     58.103ms
       SecurityDetail    |     false       true  |      8301       13.84   14.142%     28.778ms      8.021ms     97.732ms     114.69ms    148.745ms
       TradeLookup       |     false       true  |      4753        7.92    8.098%     10.749ms      7.862ms      22.47ms     35.241ms     47.571ms
       TradeOrder        |     false       true  |      5999       10.00   10.220%     35.219ms     17.499ms    103.776ms     134.59ms    172.817ms
       TradeResult       |     false       true  |      5341        8.90    9.099%     47.782ms     27.721ms     114.69ms    140.083ms    174.554ms
       TradeStatus       |     false       true  |     11265       18.77   19.192%     19.358ms      6.123ms     71.681ms      89.32ms    115.843ms
       TradeUpdate       |     false       true  |      1189        1.98    2.026%     18.025ms     15.062ms      34.89ms     46.629ms     55.825ms
      
      Customers     :     5000
      Nominal  tpsE :    10.00
      Measured tpsE :     8.90
      tpsE fraction :   89.02% (80% - 100% passing)
      
      Reported tpsE :     8.90
      ```
      
      This report is not yet hooked up to roachperf.
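      The pass/fail band in the report follows from simple arithmetic. A minimal sketch, assuming the TPC-E sizing rule of one nominal tpsE per 500 configured customers (an assumption; the rule is not stated in this commit):

      ```go
      package main

      import "fmt"

      func main() {
          // Assumed TPC-E sizing rule: nominal tpsE = customers / 500.
          customers := 5000.0
          nominal := customers / 500 // 10.00 for this run
          measured := 8.90           // "Measured tpsE" from the report above

          fraction := measured / nominal * 100
          fmt.Printf("tpsE fraction: %.2f%%\n", fraction)
          fmt.Println("within 80%-100% passing band:", fraction >= 80 && fraction <= 100)
      }
      ```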
      Co-authored-by: Nathan VanBenschoten <[email protected]>
      76935fd6
  2. 29 Nov, 2020 1 commit
    • Merge #57106 · 4970b95e (authored by craig[bot])
      57106: delegate: implement SHOW REGION FROM DATABASE r=ajstorm a=otan
      
      Looks like SHOW REGION FROM DATABASE has an optional db argument
      throughout the spec, so why not implement it both ways.
      
      Release note (sql change): Implement SHOW REGION FROM DATABASE and SHOW
      REGION FROM DATABASE db, which shows all regions for the given database,
      as well as whether that region is the primary region.
      Co-authored-by: Oliver Tan <[email protected]>
      4970b95e
  3. 28 Nov, 2020 5 commits
    • Merge #57186 · 54cdc5fe (authored by craig[bot])
      57186: migration: introduce fence versions r=irfansharif a=irfansharif
      
      The migrations infrastructure makes use of internal fence versions when
      stepping through consecutive versions. We'll need to first bump the
      fence version for each intermediate cluster version, before bumping the
      "real" one. Doing so allows us to provide the invariant that whenever a
      cluster version is active, all nodes in the cluster (including ones
      added concurrently during version upgrades) are running binaries that
      know about the version.
      
      It's instructive to walk through how we expect a version migration from
      v21.1 to v21.2 to take place, and how we behave in the presence of new
      v21.1 or v21.2 nodes being added to the cluster.
        - All nodes are running v21.1
        - All nodes are rolled into v21.2 binaries, but with active cluster
          version still as v21.1
        - The first version bump will be into v21.2(fence=1), see the
          migration manager above for where that happens
      Then concurrently:
        - A new node is added to the cluster, but running binary v21.1
        - We try bumping the cluster gates to v21.2(fence=1)
      
      If the v21.1 node manages to sneak in before the version bump, it's
      fine, as the version bump is a no-op (all fence version bumps are). Any
      subsequent bumps (including the "actual" one bumping to v21.2) will fail
      during the validation step where we'll first check to see that all nodes
      are running v21.2 binaries.
      
      If the v21.1 node is only added after v21.2(fence=1) is active, it won't
      be able to actually join the cluster (it'll be prevented by the join
      RPC).
      
      We rely on the invariant we introduced earlier that requires new
      user-defined versions (users being crdb engineers) to always have
      even-numbered Internal versions. This reserved the odd numbers to slot
      in fence versions for each user-defined one.
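      The fence-then-real stepping order can be sketched as follows (a toy model; the `version` type and `fenceVersion` helper here are illustrative stand-ins, not the actual pkg/clusterversion definitions):

      ```go
      package main

      import "fmt"

      // version is a stand-in for a cluster version: user-defined versions
      // carry even internal components; the odd ones are reserved for fences.
      type version struct {
          major, minor, internal int
      }

      // fenceVersion returns the fence immediately preceding v, formed by
      // decrementing the (even) internal component into the reserved odd slot.
      func fenceVersion(v version) version {
          if v.internal%2 != 0 {
              panic("user-defined versions must have even internal components")
          }
          return version{v.major, v.minor, v.internal - 1}
      }

      func main() {
          target := version{major: 21, minor: 2, internal: 4}
          // The fence is bumped first, then the "real" version.
          for _, s := range []version{fenceVersion(target), target} {
              fmt.Printf("bump to v%d.%d(internal=%d)\n", s.major, s.minor, s.internal)
          }
      }
      ```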
      
      Release note: None
      
      ---
      
      Only the last two commits here are of interest. Earlier commits are from
      #57154 and #57118.
      Co-authored-by: irfan sharif <[email protected]>
      54cdc5fe
    • migration: introduce fence versions · 6787eecf (authored by irfan sharif)
      The migrations infrastructure makes use of internal fence versions when
      stepping through consecutive versions. We'll need to first bump the
      fence version for each intermediate cluster version, before bumping the
      "real" one. Doing so allows us to provide the invariant that whenever a
      cluster version is active, all nodes in the cluster (including ones
      added concurrently during version upgrades) are running binaries that
      know about the version.
      
      It's instructive to walk through how we expect a version migration from
      v21.1 to v21.2 to take place, and how we behave in the presence of new
      v21.1 or v21.2 nodes being added to the cluster.
        - All nodes are running v21.1
        - All nodes are rolled into v21.2 binaries, but with active cluster
          version still as v21.1
        - The first version bump will be into v21.2(fence=1), see the
          migration manager above for where that happens
      Then concurrently:
        - A new node is added to the cluster, but running binary v21.1
        - We try bumping the cluster gates to v21.2(fence=1)
      
      If the v21.1 node manages to sneak in before the version bump, it's
      fine, as the version bump is a no-op (all fence version bumps are). Any
      subsequent bumps (including the "actual" one bumping to v21.2) will fail
      during the validation step where we'll first check to see that all nodes
      are running v21.2 binaries.
      
      If the v21.1 node is only added after v21.2(fence=1) is active, it won't
      be able to actually join the cluster (it'll be prevented by the join
      RPC).
      
      ---
      
      We rely on the invariant we introduced earlier that requires new
      user-defined versions (users being crdb engineers) to always have
      even-numbered Internal versions. This reserved the odd numbers to slot
      in fence versions for each user-defined one.
      
      Release note: None
      6787eecf
    • Merge #57155 #57156 · effee639 (authored by craig[bot])
      57155: clusterversion,*: remove VersionContainsEstimatesCounter r=irfansharif a=irfansharif
      
      This, and all surrounding migration code and tests, are now safe to
      remove. It mostly served as documentation, which we've moved to the
      field itself. Part of #47447. Fixes #56401.
      
      (While here, let's also tell git that `versionkey_string.go` is a
      generated file.)
      
      Release note: None
      
      ---
      
      First commit is from #57154.
      
      57156: sql,clusterversion: remove VersionAuthLocalAndTrustRejectMethods r=irfansharif a=irfansharif
      
      It's an old cluster version, introduced in the 19.2 release cycle. It's
      now safe to remove. Part of #47447. Fixes #56398.
      
      
      Release note: None
      
      ---
      
      See only last commit. First two are from #57154 and #57155 respectively.
      Co-authored-by: irfan sharif <[email protected]>
      effee639
    • sql,clusterversion: remove VersionAuthLocalAndTrustRejectMethods · ebf8c3e8 (authored by irfan sharif)
      It's an old cluster version, introduced in the 19.2 release cycle. It's
      now safe to remove. Part of #47447. Fixes #56398.
      
      Release note: None
      ebf8c3e8
    • clusterversion: introduce even internal version constraint · 8e8ccb14 (authored by irfan sharif)
      For versions introduced during and after the v21.1 release, we introduce
      the constraint that internal versions must always be even. We intend to
      use the odd versions as internal fence versions for the long running
      migrations infrastructure. This will come in future commits.
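      A sketch of the check this constraint implies (illustrative names only; the real enforcement lives in pkg/clusterversion):

      ```go
      package main

      import "fmt"

      // versionKey is a stand-in for a user-defined cluster version entry.
      type versionKey struct {
          name     string
          internal int
      }

      // validateEvenInternal enforces the constraint: every user-defined
      // version must have an even internal component, reserving the odd
      // values for auto-generated fence versions.
      func validateEvenInternal(keys []versionKey) error {
          for _, k := range keys {
              if k.internal%2 != 0 {
                  return fmt.Errorf("version %s has odd internal component %d", k.name, k.internal)
              }
          }
          return nil
      }

      func main() {
          keys := []versionKey{{"Start21_1", 2}, {"SomeMigration", 4}}
          fmt.Println("valid:", validateEvenInternal(keys) == nil)
      }
      ```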
      
      Release note: None
      8e8ccb14
  4. 27 Nov, 2020 12 commits
    • Merge #57118 · bbb7e7c6 (authored by craig[bot])
      57118: migration,server: plumb in initial version to the migration manager r=irfansharif a=irfansharif
      
      It'll let us return early, and it makes the manager "more functional"
      in its behavior.
      
      We also promote the clusterversion type to a first-class citizen in the
      external facing API for pkg/migration. This package concerns itself with
      migrations between cluster versions and we should have our API reflect as much.
      
      The proto changes are safe; we haven't had a major release with the previous
      proto definitions.
      
      Release note: None
      Co-authored-by: irfan sharif <[email protected]>
      bbb7e7c6
    • migration: promote clusterversion to first-class citizen... · 3783e636 (authored by irfan sharif)
      in the external facing API for pkg/migration.
      
      The migration package concerns itself with migrations between cluster
      versions, so we should have our API reflect as much. The proto changes
      are safe; we haven't had a major release with the previous proto
      definitions.
      
      Release note: None
      3783e636
    • clusterversion: rename proto pkg name for cleaner import path · e8c3b906 (authored by irfan sharif)
      Release note: None
      e8c3b906
    • migration,server: plumb in initial version to the migration manager · 93812e96 (authored by irfan sharif)
      It'll let us return early, and it makes the manager "more functional"
      in its behavior. We should also be using the ClusterVersion type when
      talking about migrations, so we make that change here.
      
      Release note: None
      93812e96
    • clusterversion,*: remove VersionContainsEstimatesCounter · 81070225 (authored by irfan sharif)
      This, and all surrounding migration code and tests, are now safe to
      remove. It mostly served as documentation, which we've moved to the
      field itself. Part of #47447. Fixes #56401.
      
      (While here, let's also tell git that `versionkey_string.go` is a
      generated file.)
      
      Release note: None
      81070225
    • Merge #57154 · da5dc14e (authored by craig[bot])
      57154: clusterversion: document process for introducing cluster versions r=irfansharif a=irfansharif
      
      And for deleting old ones. This stuff is endlessly confusing.
      
      Release note: None
      
      ---
      
      (I could be wrong about some of it, do scrutinize.)
      Co-authored-by: irfan sharif <[email protected]>
      da5dc14e
    • clusterversion: move version comments next to definition · a506329a (authored by irfan sharif)
      Jump-to-symbol for a given version tag will now surface the intent of
      the version itself (instead of expecting readers to know to grep for
      the same version in the block below).
      
      Release note: None
      a506329a
    • clusterversion: document process for introducing cluster versions · d98aa66b (authored by irfan sharif)
      And for deleting old ones. This stuff is endlessly confusing.
      
      Release note: None
      d98aa66b
    • Merge #57180 · 85781f8f (authored by craig[bot])
      57180: sql: don't use b.Parallel in BenchmarkFlowSetup r=jordanlewis a=nvanbenschoten
      
      We've been using this benchmark lately as Jordan and others push to
      reduce heap allocations. It's a fairly nice middle ground between a
      microbenchmark and some kind of larger end-to-end benchmark like we have
      in `pkg/bench`.
      
      The problem I've seen with the benchmark when trying to use it is that
      its latency figures don't make sense. Each of the benchmark permutations
      show latency numbers around 15μs, which is unrealistic – we're not
      running a SQL query in 15μs. The reason for this confusion was because
      we were using `b.RunParallel`. The purpose of `b.RunParallel` (whose
      documentation is super vague) isn't to make the test run faster, it's to
      observe how operations scale across higher concurrency levels when they
      share resources and contend. Because of this, it takes the total
      benchmark time and divides it by the number of iterations which have
      been split across goroutines. So for operations which scale linearly
      with concurrency, their reported latency will be 1/GOMAXPROCS what it is
      sequentially. For operations that don't scale at all with concurrency
      (e.g. entire operation requires mutual exclusion), their reported
      latency will not change with `b.RunParallel`. For most operations,
      the reported latency will be somewhere in the middle.
      
      But this isn't really the purpose of this test, which just wants to know
      the true cost of setting up and executing a flow. Removing
      `b.RunParallel` gets us a much more realistic latency figure. With that
      change, iterations take somewhere between 80-120μs.
      
      ```
      name                                           old time/op    new time/op    delta
      FlowSetup/vectorize=false/distribute=false-16    15.4µs ±12%    83.8µs ± 1%  +444.70%  (p=0.000 n=10+9)
      FlowSetup/vectorize=true/distribute=false-16     14.9µs ± 6%    89.9µs ± 3%  +504.48%  (p=0.000 n=10+10)
      FlowSetup/vectorize=false/distribute=true-16     16.0µs ± 4%    96.8µs ± 3%  +506.15%  (p=0.000 n=10+10)
      FlowSetup/vectorize=true/distribute=true-16      17.5µs ± 5%   114.1µs ±10%  +552.23%  (p=0.000 n=10+10)
      
      name                                           old alloc/op   new alloc/op   delta
      FlowSetup/vectorize=false/distribute=true-16     33.3kB ± 0%    33.3kB ± 0%      ~     (p=0.403 n=10+10)
      FlowSetup/vectorize=true/distribute=false-16     28.1kB ± 0%    28.2kB ± 0%    +0.25%  (p=0.000 n=9+8)
      FlowSetup/vectorize=true/distribute=true-16      31.6kB ± 0%    31.8kB ± 0%    +0.82%  (p=0.000 n=9+10)
      FlowSetup/vectorize=false/distribute=false-16    29.8kB ± 0%    30.1kB ± 0%    +1.10%  (p=0.000 n=8+8)
      
      name                                           old allocs/op  new allocs/op  delta
      FlowSetup/vectorize=false/distribute=false-16       214 ± 1%       213 ± 0%    -0.56%  (p=0.000 n=10+6)
      FlowSetup/vectorize=false/distribute=true-16        244 ± 0%       243 ± 0%    -0.49%  (p=0.000 n=10+6)
      FlowSetup/vectorize=true/distribute=false-16        233 ± 1%       232 ± 1%    -0.39%  (p=0.014 n=10+10)
      FlowSetup/vectorize=true/distribute=true-16         263 ± 0%       263 ± 0%      ~     (p=0.500 n=10+7)
      ```
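      The division described above is easy to demonstrate with a toy benchmark, where a fixed 1ms sleep stands in for the per-op work (a sketch; sleeps scale perfectly with concurrency, so the parallel figure shrinks by up to a factor of GOMAXPROCS):

      ```go
      package main

      import (
          "fmt"
          "testing"
          "time"
      )

      func main() {
          const op = time.Millisecond // stand-in for a fixed-latency operation

          // Sequential loop: reported ns/op is the true per-iteration latency.
          seq := testing.Benchmark(func(b *testing.B) {
              for i := 0; i < b.N; i++ {
                  time.Sleep(op)
              }
          })

          // b.RunParallel: iterations are spread across GOMAXPROCS goroutines,
          // so the total wall time is divided by *all* iterations and the
          // reported latency shrinks accordingly.
          par := testing.Benchmark(func(b *testing.B) {
              b.RunParallel(func(pb *testing.PB) {
                  for pb.Next() {
                      time.Sleep(op)
                  }
              })
          })

          fmt.Println("sequential ns/op:", seq.NsPerOp())
          fmt.Println("parallel   ns/op:", par.NsPerOp())
      }
      ```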
      Co-authored-by: Nathan VanBenschoten <[email protected]>
      85781f8f
    • Merge #56628 · 25698c5e (authored by craig[bot])
      56628: sql: create region type enums for multiregion databases  r=otan a=arulajmani
      
      This patch introduces a new kind of type descriptor called
      `TypeDescriptor_MULTIREGION_ENUM`. This type is similar to a regular
      enum, except:
      - We don't create an array type alias underneath the hood.
      - The user isn't allowed to directly alter/drop this type of enum.
      - The parentID is not validated (currently). This is because this enum
      may be created along with the database. In such scenarios, we can't
      get the db desc for validation as it was created in the same txn. This
      can (and should) change once the descGetter is backed by the
      desc.Collection instead of reading directly from KV.
      
      In an upcoming patch, we plan to support dropping values
      (removing a region) from this type of enum as well.
      
      As of this patch, we create a type called `region` as part of
      `CREATE DATABASE` if `REGIONS` is specified. Currently, this type is
      created under the public schema, but this is to change in an upcoming
      patch. See #56877 for details.
      
      Release note (sql change): When creating a database with the regions
      clause specified, we create a `regions` enum type under the hood.
      Co-authored-by: arulajmani <[email protected]>
      25698c5e
    • sql: create region type enums for multiregion databases · 9a52a9ca (authored by arulajmani)
      This patch introduces a new kind of type descriptor called
      `TypeDescriptor_MULTIREGION_ENUM`. This type is similar to a regular
      enum, except:
      - We don't create an array type alias underneath the hood.
      - The user isn't allowed to directly alter/drop this type of enum.
      - The parentID is not validated (currently). This is because this enum
      may be created along with the database. In such scenarios, we can't
      get the db desc for validation as it was created in the same txn. This
      can (and should) change once the descGetter is backed by the
      desc.Collection instead of reading directly from KV.
      
      In an upcoming patch, we plan to support dropping values
      (removing a region) from this type of enum as well.
      
      As of this patch, we create a type called `region` as part of
      `CREATE DATABASE` if `REGIONS` is specified. Currently, this type is
      created under the public schema, but this is to change in an upcoming
      patch. See #56877 for details.
      
      Release note (sql change): When creating a database with the regions
      clause specified, we create a `regions` enum type under the hood.
      9a52a9ca
    • Merge #57139 · 36121516 (authored by craig[bot])
      57139: sql: fix: allow nulls in materialized views r=otan a=rafiss
      
      fixes #57108 
      
      Release note (bug fix): Creating a materialized view that references a
      column with a NULL value no longer results in an error.
      Co-authored-by: Rafi Shamim <[email protected]>
      36121516
  5. 26 Nov, 2020 11 commits
    • Merge #57143 · 425d8ed0 (authored by craig[bot])
      57143: docs: edit release note for schema change feature flag r=otan a=angelapwen
      
      Release note (sql change): This is an empty commit meant to correct
      a mistake in a merged release note in #57040. The original release
      note indicates that a database administrator should toggle this
      feature flag on and off using
      SET CLUSTER SETTING feature.schemachange.enabled, but there
      should be a '_' character so that the cluster setting would be:
      SET CLUSTER SETTING feature.schema_change.enabled.
      
      Below is the full original release note, with the typo fixed.
      
      Release note (sql change): Adds a feature flag via cluster setting
      for all schema change-related features. If a user attempts
      to use these features while they are disabled, an error indicating
      that the database administrator has disabled the feature is
      surfaced.
      
      Example usage for the database administrator:
      SET CLUSTER SETTING feature.schema_change.enabled = FALSE;
      SET CLUSTER SETTING feature.schema_change.enabled = TRUE;
      Co-authored-by: angelapwen <[email protected]>
      425d8ed0
    • Merge #57142 · 40f7f9c1 (authored by craig[bot])
      57142: sql: feature flag for ALTER TABLE/INDEX SPLIT/UNSPLIT r=otan a=angelapwen
      
      Release note (sql change): Gates the ALTER TABLE...SPLIT/UNSPLIT
      and ALTER INDEX...SPLIT/UNSPLIT commands under the schema change
      feature flag added previously. If a user attempts to use these
      features while they are disabled, an error indicating that the
      system administrator has disabled the feature is surfaced.
      
      Example usage for the database administrator:
      SET CLUSTER SETTING feature.schema_change.enabled = FALSE
      SET CLUSTER SETTING feature.schema_change.enabled = TRUE
      Co-authored-by: angelapwen <[email protected]>
      40f7f9c1
    • delegate: implement SHOW REGION FROM DATABASE · d9954e16 (authored by Oliver Tan)
      Looks like SHOW REGION FROM DATABASE has an optional db argument
      throughout the spec, so why not implement it both ways.
      
      Release note (sql change): Implement SHOW REGION FROM DATABASE and SHOW
      REGION FROM DATABASE db, which shows all regions for the given database,
      as well as whether that region is the primary region.
      d9954e16
    • Merge #57151 · d1aee54f (authored by craig[bot])
      57151: descpb: rework RegionConfig to be nilable r=arulajmani a=otan
      
      * Moved RegionConfig to be a property of the DatabaseDescriptor.
      * Made RegionConfig nil-able.
      
      Release note: None
      Co-authored-by: Oliver Tan <[email protected]>
      d1aee54f
    • Merge #57094 · 00970f51 (authored by craig[bot])
      57094: bazel: pin Go 1.15.5 r=irfansharif a=otan
      
      Resolves #56060 
      
      Release note: None
      Co-authored-by: Oliver Tan <[email protected]>
      00970f51
    • descpb: rework RegionConfig to be nilable · f1082653 (authored by Oliver Tan)
      * Moved RegionConfig to be a property of the DatabaseDescriptor.
      * Made RegionConfig nil-able.
      
      Release note: None
      f1082653
    • sql: don't use b.Parallel in BenchmarkFlowSetup · 81c6639a (authored by Nathan VanBenschoten)
      We've been using this benchmark lately as Jordan and others push to
      reduce heap allocations. It's a fairly nice middle ground between a
      microbenchmark and some kind of larger end-to-end benchmark like we have
      in `pkg/bench`.
      
      The problem I've seen with the benchmark when trying to use it is that
      its latency figures don't make sense. Each of the benchmark permutations
      show latency numbers around 15μs, which is unrealistic – we're not
      running a SQL query in 15μs. The reason for this confusion was because
      we were using `b.RunParallel`. The purpose of `b.RunParallel` (whose
      documentation is super vague) isn't to make the test run faster, it's to
      observe how operations scale across higher concurrency levels when they
      share resources and contend. Because of this, it takes the total
      benchmark time and divides it by the number of iterations which have
      been split across goroutines. So for operations which scale linearly
      with concurrency, their reported latency will be 1/GOMAXPROCS what it is
      sequentially. For operations that don't scale at all with concurrency
      (e.g. entire operation requires mutual exclusion), their reported
      latency will not change with `b.RunParallel`. For most operations,
      the reported latency will be somewhere in the middle.
      
      But this isn't really the purpose of this test, which just wants to know
      the true cost of setting up and executing a flow. Removing
      `b.RunParallel` gets us a much more realistic latency figure. With that
      change, iterations take somewhere between 80-120μs.
      
      ```
      name                                           old time/op    new time/op    delta
      FlowSetup/vectorize=false/distribute=false-16    15.4µs ±12%    83.8µs ± 1%  +444.70%  (p=0.000 n=10+9)
      FlowSetup/vectorize=true/distribute=false-16     14.9µs ± 6%    89.9µs ± 3%  +504.48%  (p=0.000 n=10+10)
      FlowSetup/vectorize=false/distribute=true-16     16.0µs ± 4%    96.8µs ± 3%  +506.15%  (p=0.000 n=10+10)
      FlowSetup/vectorize=true/distribute=true-16      17.5µs ± 5%   114.1µs ±10%  +552.23%  (p=0.000 n=10+10)
      
      name                                           old alloc/op   new alloc/op   delta
      FlowSetup/vectorize=false/distribute=true-16     33.3kB ± 0%    33.3kB ± 0%      ~     (p=0.403 n=10+10)
      FlowSetup/vectorize=true/distribute=false-16     28.1kB ± 0%    28.2kB ± 0%    +0.25%  (p=0.000 n=9+8)
      FlowSetup/vectorize=true/distribute=true-16      31.6kB ± 0%    31.8kB ± 0%    +0.82%  (p=0.000 n=9+10)
      FlowSetup/vectorize=false/distribute=false-16    29.8kB ± 0%    30.1kB ± 0%    +1.10%  (p=0.000 n=8+8)
      
      name                                           old allocs/op  new allocs/op  delta
      FlowSetup/vectorize=false/distribute=false-16       214 ± 1%       213 ± 0%    -0.56%  (p=0.000 n=10+6)
      FlowSetup/vectorize=false/distribute=true-16        244 ± 0%       243 ± 0%    -0.49%  (p=0.000 n=10+6)
      FlowSetup/vectorize=true/distribute=false-16        233 ± 1%       232 ± 1%    -0.39%  (p=0.014 n=10+10)
      FlowSetup/vectorize=true/distribute=true-16         263 ± 0%       263 ± 0%      ~     (p=0.500 n=10+7)
      ```
      81c6639a
    • Merge #57178 · 87cf74e0 (authored by craig[bot])
      57178: Revert "util/log: more misc cleanups" r=irfansharif a=irfansharif
      
      This reverts #57000, which introduced a race into crdb. I was able to
      reproduce it using
      
        make stressrace PKG=./pkg/sql/pgwire TESTS=TestConnResultsBufferSize
      
      The buggy commit in question appears to be ffd7f683.
      Touches #57162 and #57161.
      
      Release note: None
      Co-authored-by: irfan sharif <[email protected]>
      87cf74e0
    • Revert "util/log: more misc cleanups" · ee22b185 (authored by irfan sharif)
      This reverts #57000, which introduced a race into crdb. I was able to
      reproduce it using
      
        make stressrace PKG=./pkg/sql/pgwire TESTS=TestConnResultsBufferSize
      
      The buggy commit in question appears to be ffd7f683.
      
      Release note: None
      ee22b185
    • Merge #57019 · e31d60c1 (authored by craig[bot])
      57019: ui: update link on sessions page documentation r=koorosh a=koorosh
      
      Replace the link to the sessions page documentation with the correct one.
      
      Release note: none
      Co-authored-by: Andrii Vorobiov <[email protected]>
      e31d60c1
    • roachtest: introduce new tpce/c=5000/nodes=3 roachtest · 70d85583 (authored by Nathan VanBenschoten)
      This commit introduces a new roachtest called `tpce/c=5000/nodes=3`, which
      uses a Dockerized version of https://github.com/cockroachlabs/tpc-e to run
      an instance of the TPC-E benchmark. The setup is not spec-compliant because
      it uses a merged Driver and Tier A process, allowing for easier deployment.
      
      The benchmark first IMPORTs a 5000 customer dataset from Google Storage and
      then runs the workload for an hour. The workload spits out a report at the
      end of the run that looks like:
      ```
      _Transaction_______|__pass_mix___pass_lat__|______txns______txns/s_______mix__________avg__________p50__________p90__________p99_________pMax
       BrokerVolume      |     false       true  |      2909        4.85    4.956%     18.345ms      8.689ms     52.051ms     63.575ms     77.651ms
       CustomerPosition  |     false       true  |      7726       12.88   13.163%     13.311ms      7.941ms     35.595ms     55.825ms     74.606ms
       DataMaintenance   |       n/a        n/a  |        10        0.02    0.017%      17.94ms     19.147ms     25.083ms     27.172ms     27.172ms
       MarketFeed        |     false       true  |       534        0.89    0.910%    145.943ms    140.083ms     179.87ms     245.24ms    265.666ms
       MarketWatch       |     false       true  |     10679       17.80   18.194%     12.509ms      7.404ms     32.858ms     45.705ms     58.103ms
       SecurityDetail    |     false       true  |      8301       13.84   14.142%     28.778ms      8.021ms     97.732ms     114.69ms    148.745ms
       TradeLookup       |     false       true  |      4753        7.92    8.098%     10.749ms      7.862ms      22.47ms     35.241ms     47.571ms
       TradeOrder        |     false       true  |      5999       10.00   10.220%     35.219ms     17.499ms    103.776ms     134.59ms    172.817ms
       TradeResult       |     false       true  |      5341        8.90    9.099%     47.782ms     27.721ms     114.69ms    140.083ms    174.554ms
       TradeStatus       |     false       true  |     11265       18.77   19.192%     19.358ms      6.123ms     71.681ms      89.32ms    115.843ms
       TradeUpdate       |     false       true  |      1189        1.98    2.026%     18.025ms     15.062ms      34.89ms     46.629ms     55.825ms
      
      Customers     :     5000
      Nominal  tpsE :    10.00
      Measured tpsE :     8.90
      tpsE fraction :   89.02% (80% - 100% passing)
      
      Reported tpsE :     8.90
      ```
      
      This report is not yet hooked up to roachperf.
      70d85583
  6. 25 Nov, 2020 10 commits
    • docs: edit release note for schema change feature flag · b97c9fbb (authored by angelapwen)
      Release note (sql change): This is an empty commit meant to correct
      a mistake in a merged release note in #57040. The original release
      note indicates that a database administrator should toggle this
      feature flag on and off using
      SET CLUSTER SETTING feature.schemachange.enabled, but there
      should be a '_' character so that the cluster setting would be:
      SET CLUSTER SETTING feature.schema_change.enabled.
      
      Below is the full original release note, with the typo fixed.
      
      Release note (sql change): Adds a feature flag via cluster setting
      for all schema change-related features. If a user attempts
      to use these features while they are disabled, an error indicating
      that the database administrator has disabled the feature is
      surfaced.
      
      Example usage for the database administrator:
      SET CLUSTER SETTING feature.schema_change.enabled = FALSE;
      SET CLUSTER SETTING feature.schema_change.enabled = TRUE;
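
      A short illustrative session (a sketch only: the table `t` is
      hypothetical and the exact error text CockroachDB surfaces may
      differ):

      ```sql
      -- Disable all schema-change-related features cluster-wide.
      SET CLUSTER SETTING feature.schema_change.enabled = FALSE;

      -- A schema change (illustrated with a hypothetical CREATE TABLE) is
      -- now rejected with an error indicating that the administrator has
      -- disabled the feature (exact wording may vary).
      CREATE TABLE t (x INT);

      -- Re-enable schema changes.
      SET CLUSTER SETTING feature.schema_change.enabled = TRUE;
      ```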
      b97c9fbb
    • Merge #56975 · fe5da5c2
      craig[bot] authored
      56975: sql: fix for column type for pg_constraint.confkey to smallint[] r=rafiss a=mnovelodou
      
      Previously, the type of pg_constraint.confkey was reported as
      bigint[], while PostgreSQL reports smallint[]. To address this, this
      patch changes the array data type for this column from types.Int to
      types.Int2. It addresses the data type of the conkey column as well.
      
      Fixes #44793
      
      Release note (sql change): Changed the pg_constraint column types
      for confkey and conkey to smallint[] to improve compatibility with
      Postgres.
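
      As a quick sanity check (a sketch, assuming a SQL shell connected to
      a cluster with this patch; `pg_typeof` is a standard Postgres
      function), the reported element type should now match Postgres:

      ```sql
      -- After this change, both columns should report smallint[] rather
      -- than bigint[], matching Postgres.
      SELECT pg_typeof(conkey), pg_typeof(confkey)
      FROM pg_constraint
      LIMIT 1;
      ```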
      Co-authored-by: MiguelNovelo <[email protected]>
      fe5da5c2
    • sql: allow nulls in materialized views · 0d8f7d4d
      Rafi Shamim authored
      This also matches the Postgres behavior, which reports the columns of a
      materialized view as nullable, even if the underlying columns are NOT
      NULL.
      
      Release note (bug fix): Creating a materialized view that references a
      column with a NULL value no longer results in an error.
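      
      A minimal sketch of the fixed scenario (the table and view names are
      hypothetical):
      
      ```sql
      -- Previously this could fail because the selected column contains a
      -- NULL value; with this fix the view is created, and its columns are
      -- reported as nullable, matching Postgres.
      CREATE TABLE t (x INT);
      INSERT INTO t VALUES (NULL);
      CREATE MATERIALIZED VIEW v AS SELECT x FROM t;
      ```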
      0d8f7d4d
    • Merge #57086 #57105 · 75df73eb
      craig[bot] authored
      57086: roachtest: Use encryption-at-rest in some random clearrange runs r=itsbilal a=itsbilal
      
      Currently, we always run the `clearrange/*` roachtests with
      encryption-at-rest disabled. It's valuable to sometimes
      run them with encryption-at-rest enabled, as that could surface
      bugs in that functionality. A recent bug in Pebble
      would have been uncovered earlier if we had been running this
      roachtest with encryption enabled.
      
      This change updates the clearrange roachtest to use encryption
      on some runs.
      
      Release note: None.
      
      57105: descpb: add Region as a type r=ajstorm a=otan
      
      For the sake of code readability, make region its own type in the
      descpb package and change the regions and primary_region fields to be
      cast to this type.
      
      Release note: None
      Co-authored-by: Bilal Akhtar <[email protected]>
      Co-authored-by: Oliver Tan <[email protected]>
      75df73eb
    • Merge #57000 · 52c79da1
      craig[bot] authored
      57000: util/log: more misc cleanups r=itsbilal a=knz
      
      Helps towards #51987
      See individual commits for details.
      Co-authored-by: Raphael 'kena' Poss <[email protected]>
      52c79da1
    • sql: feature flag for ALTER TABLE/INDEX SPLIT/UNSPLIT · 3dacfef8
      angelapwen authored
      Release note (sql change): Gates the ALTER TABLE...SPLIT/UNSPLIT
      and ALTER INDEX...SPLIT/UNSPLIT commands under the schema change
      feature flag added previously. If a user attempts to use these
      features while they are disabled, an error indicating that the
      system administrator has disabled the feature is surfaced.
      
      Example usage for the database administrator:
      SET CLUSTER SETTING feature.schema_change.enabled = FALSE
      SET CLUSTER SETTING feature.schema_change.enabled = TRUE
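      
      Illustrative use of the gate (a sketch only: the table `t` and the
      split value are hypothetical, and the exact error text may differ):
      
      ```sql
      SET CLUSTER SETTING feature.schema_change.enabled = FALSE;
      
      -- With the flag off, the command is rejected with an error noting
      -- that the administrator has disabled the feature.
      ALTER TABLE t SPLIT AT VALUES (10);
      
      SET CLUSTER SETTING feature.schema_change.enabled = TRUE;
      ```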
      3dacfef8
    • util/log: new sub package 'logcrash' · 2c661489
      Raphael 'kena' Poss authored
      Previously, all the crash reporting code was inside the 'log' package
      directly. This forced 'log' to import the cluster setting machinery,
      and the test files to import the entire code base to instantiate test
      servers. As a result, re-building and running unit tests was fairly
      slow.
      
      This patch alleviates the situation by moving the crash reporting
      machinery into its own sub-package 'logcrash'.
      
      Release note: None
      2c661489
    • util/log: share some common properties across sinks · 552e2a77
      Raphael 'kena' Poss authored
      The parameters "threshold" and "formatter" are actually defined for
      all sinks, so they do not need to live "inside" the `logSink`.
      
      This patch extracts them into the `sinkInfo`, which further
      simplifies the code.
      
      Release note: None
      552e2a77
    • util/log: make the entry counter a property of the sink · ffd7f683
      Raphael 'kena' Poss authored
      Prior to this patch, whether to number log entries (a feature used
      for audit logs) was defined per channel.
      
      Foreseeing a future where the same sink (e.g. a single file) is used
      by multiple channels, we want to make sure that the same counter value
      is not reused for multiple entries written to the same sink. So the
      numbering is really a property of the sink, not of the channel.
      
      This patch effects the change.
      
      Release note: None
      ffd7f683
    • descpb: add Region as a type · 00755c08
      Oliver Tan authored
      For the sake of code readability, make region its own type in the
      descpb package and change the regions and primary_region fields to be
      cast to this type.
      
      Release note: None
      00755c08