Using the Plaid /transactions/sync API endpoint is a good approach to retrieve all historical transaction data for an account. When you call the API with cursor set to an empty string (""), you are effectively starting from the beginning of the transaction history. Plaid returns the initial batch of transactions along with a next_cursor value, which you should pass in subsequent requests to get the next batch. Repeat this process until the response's has_more field is false, indicating there are no more transactions to fetch.
To handle synchronization effectively, you should process the batches in the order they are received, and always use the next_cursor from the current response to fetch the next batch. This way, you maintain the integrity and sequence of the transaction data according to how Plaid provides it.
Remember to handle edge cases, like re-synchronization if a batch fails or if there is a long delay between fetches, to ensure your local copy of the transactions remains accurate and up-to-date.
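A minimal pagination loop might look like this. It is only a sketch: `fetch_page` stands in for whatever client call you use to hit /transactions/sync, and the field names follow Plaid's documented response shape.

```python
def sync_all_transactions(fetch_page):
    """Pull every batch from /transactions/sync, starting from an empty cursor.

    fetch_page(cursor) is assumed to return the parsed JSON response,
    containing 'added', 'next_cursor', and 'has_more' fields.
    """
    cursor = ""          # empty string = start from the beginning of history
    added = []
    while True:
        page = fetch_page(cursor)
        added.extend(page["added"])
        cursor = page["next_cursor"]   # always advance to the returned cursor
        if not page["has_more"]:       # no more batches to fetch
            break
    return added, cursor
```

In a real integration, `fetch_page` would POST your client_id, secret, access_token, and the cursor to the endpoint; persist the final cursor so later syncs only pick up changes since the last run.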
Thanks a lot for the suggestion! It was indeed much simpler to just modify the JSON in the Employee vector search and the other vector index results.
I implemented a solution that builds upon your approach while adding some additional robustness to handle different types of empty values in my Neo4j vector search results.
def filter_empty_connections(results):
    """
    Filter out empty connections from vector search results across all data types
    (Employee, Work, Project, WorkAllocation, WorkScheduleRule).
    """
    if not results or "results" not in results:
        return results

    def is_empty_value(val):
        """Check if a value is considered empty."""
        if isinstance(val, list):
            # Check if list is empty or contains only empty/null values
            return len(val) == 0 or all(is_empty_value(item) for item in val)
        if isinstance(val, dict):
            # Check if dict is empty or contains only empty/null values
            return len(val) == 0 or all(is_empty_value(v) for v in val.values())
        return val is None or val == ""

    def filter_single_item(item):
        """Filter empty connections from a single item's connections array."""
        if "connections" not in item:
            return item
        filtered_connections = []
        for conn in item["connections"]:
            # Skip the connection if all its non-type fields are empty
            has_non_empty_value = False
            for key, val in conn.items():
                if key != "type" and not is_empty_value(val):
                    has_non_empty_value = True
                    break
            if has_non_empty_value:
                filtered_connections.append(conn)
        item["connections"] = filtered_connections
        return item

    filtered_items = [
        filter_single_item(item)
        for item in results["results"]
    ]
    return {"results": filtered_items}
Then I integrated it into my index functions like this:
def employee_details_index(query, query_embedding, n_results=50):
    # ... existing query execution code ...
    structured_results = []
    for row in results:
        employee_data = {
            "employeeName": row["employeeName"],
            "score": row["score"],
            "employee": json.loads(row["employeeJson"]),
            "connections": json.loads(row["connectionsJson"])
        }
        structured_results.append(employee_data)
    # Apply filtering before returning
    filtered_results = filter_empty_connections({"results": structured_results})
    return filtered_results
This approach successfully removed empty connections like "education": [] and "has_unavailability": [] from the results while keeping the connection entries that had actual data.
Thank you again for pointing me in the right direction! This solution worked perfectly for my use case.
The solution to this is to change `tree` to `list`; this changed in Odoo 18.
Yes, you can use the Content Type Gallery to reuse content types in SharePoint Online.
Content types created in the Content Type Gallery, once published, become available to all sites and libraries in your SharePoint tenant, and the gallery supports all column types.
Reference: https://learn.microsoft.com/en-us/microsoft-365/community/content-type-propagation
Did you solve the problem?
I have the same problem with the "mouseHover" action on Safari v17.4.
element = parent.findElement(elementLocator);
Actions actions = new Actions(driverManager.driver);
actions.moveToElement(element).pause(500).build().perform();
This code works on Safari v16.0.
I changed the instance type with the help of AWS Lambda and an EventBridge scheduler (cron job).
The only con is a downtime of around 30 seconds.
I do not think you can do it that way. Instead, I can suggest my project https://github.com/reagento/dishka/, which can be used in different ways (e.g. without FastAPI, or by requesting a dependency directly).
I think handling requests with an ExecutorService is not a good idea, because the embedded Tomcat included in the spring-boot-starter-web dependency manages its own thread pool. You can simply configure it in your application properties/YAML file.
If you want to use multiple threads, use or test other use cases, for example processing a huge file (at least 2 GB).
For handling requests asynchronously, I would recommend the Spring WebFlux project.
If you're using Expo SDK 52, the issue is likely related to that. I recently updated from SDK 51 to 52 and experienced the same error. This update also involved upgrading the react-native-view-shot library from version 3.8.0 to 4.0.0.
The solution is to turn off React StrictMode.
Try adding this inside your command's execute() method and inside the query builder to force PHP/Symfony to use the Asia/Tokyo timezone.
date_default_timezone_set("Asia/Tokyo");
Also, on GitLab you need to set up your token with the Developer role in order to clone the repository. The Guest role is not sufficient and will lead to a 403 error even if you have the read_repository scope.
You say this command doesn't trigger the machine to start measuring? Then try the following.
Use PySerial Library: If you are using Python, make sure you use the pyserial library to send commands over the serial port. Here is an example of Python code to send commands using pyserial:
import serial
ser = serial.Serial(
    port='COM3',                    # use your serial port path
    baudrate=9600,                  # use your baud rate
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    bytesize=serial.EIGHTBITS,
    timeout=1
)
command = b'\x16\x16\x01\x30\x30\x02\x53\x54\x03\x42\x43\x43'
ser.write(command)
ser.close()
If it doesn't work, there may be an error in your BCC (Block Check Character) calculation. Could I show or check your BCC code?
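For reference, a BCC is commonly computed as the XOR of the framed bytes. The exact byte range to include (e.g. from STX through ETX) depends on the device's protocol, so treat the framing below as an assumption to check against your device manual:

```python
from functools import reduce

def bcc(data: bytes) -> int:
    """XOR all bytes together to produce a one-byte Block Check Character."""
    return reduce(lambda acc, b: acc ^ b, data, 0)

# Example: a hypothetical frame of STX + "ST" + ETX; append the result
# as the final byte of the message.
frame = b"\x02ST\x03"
check = bcc(frame)
```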
After opening a ticket to Jira support, here's what they suggested and it did work out:
summary ~ "\"BE:\""
I've tried what you've done (and committed the new version of the Docker image). However, I get this output:
sudo docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))"
2024-11-22 08:20:55.695121: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1732263655.707089 1 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1732263655.710704 1 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-11-22 08:20:55.722395: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-11-22 08:20:57.119693: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_COMPAT_NOT_SUPPORTED_ON_DEVICE: forward compatibility was attempted on non supported HW
2024-11-22 08:20:57.119716: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:137] retrieving CUDA diagnostic information for host: 81f8d81af78d
2024-11-22 08:20:57.119720: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:144] hostname: 81f8d81af78d
2024-11-22 08:20:57.119789: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:168] libcuda reported version is: 545.23.6
2024-11-22 08:20:57.119804: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:172] kernel reported version is: 470.256.2
2024-11-22 08:20:57.119808: E external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:262] kernel version 470.256.2 does not match DSO version 545.23.6 -- cannot find working devices in this configuration
Num GPUs Available: 0
Am I missing something? Thanks.
You should wrap the path to your script in quotation marks ("). For example:
python "C:\Users\Acer\palm-recognizition\src\predict.py"
<div class="container-fluid mt-5">
<div class="table-responsive shadow">
<table class="table table-borderless mb-0">
<thead class="table-head">
<tr>
@for (head of thead; track head.displayName) {
<th [class]="head.thClass" [ngClass]="{ 'pointer': head?.sortable }" [style]="head.thStyle">
@switch (head.elementType)
{
<!-- #region Text -->
@case ('text') {
<span [id]="head?.id" (click)="head?.event == 'click'? eventTriggered($event, head.displayName): '' "
[class]="head?.class" [style]="head?.style">
@if (head?.sortable) {
<i class="fa-solid fa-sort"></i>
}
{{head.displayName}}
</span>
}
<!-- #endregion -->
<!-- #region innerHTML -->
@case ('innerHTML') {
<span (click)="head?.event === 'click' ? eventTriggered($event,head.displayName) : ''"
[innerHTML]="head.displayName" [id]="head?.id" [class]="head?.class" [style]="head?.style">
</span>
}
<!-- #endregion -->
<!-- #region input -->
@case ('input') {
<input [type]="head?.inputType" [id]="head?.id" [class]="head?.class" [style]="head?.style"
(click)="head?.event === 'click' ? eventTriggered($event,head.displayName,'click') : ''"
(change)="head?.event === 'change' ? eventTriggered($event,head.displayName,'change') : ''">
<label [for]="head?.id" [class]="head?.class" [innerHTML]="head?.inputLabel"></label>
}
<!-- #endregion -->
<!-- #region icon -->
@case ('icon') {
<button class="btn border-0" [id]="head?.id" [style]="head?.style"
(click)="head.event === 'click' ? eventTriggered($event,head.displayName):''">
<i [class]="head?.iconClass"></i>
</button>
}
@case ('iconWithText') {
<button class="btn border-0" [id]="head?.id" [style]="head?.style"
(click)="head.event === 'click' ? eventTriggered($event,head.displayName):''">
<i [class]="head?.iconClass"></i>
</button>
<span>{{head.iconText}}</span>
}
<!-- #endregion -->
}
</th>
}
</tr>
</thead>
<tbody>
@for (data of dataArr | paginate:{itemsPerPage: pagination.pageSize , currentPage: pagination.page , totalItems:
pagination.totalItems}; track data?.id;) {
<tr>
@for (body of tbody; track $index; ) {
<td [style]="body?.tdStyle" [class]="body.tdClass"
[ngClass]="{'pointer':body?.routerLink || body?.clickFunction}"
(click)="(body?.clickFunction && body.parameter) ? eventTriggered($event,data[body.parameter],'click') : ''"
[routerLink]="body?.routerLink">
@switch (body.elementType) {
<!-- #region Text -->
@case ('text') {
<span [class]="body?.class" [style]="body?.style" [id]="body?.id"
[pTooltip]="data[body.attrName]?.length >=40 ? data[body.attrName] : ''" tooltipPosition="top">
{{data[body.attrName]?.slice(0,40)}}
</span>
@if (data[body.attrName]?.length >= 40) {
<span>...</span>
}
}
<!-- #endregion -->
<!-- #region Serial No -->
@case ('serialNo') {
<span [class]="body?.class" [style]="body?.style" [id]="body?.id">
{{($index + 1 ) }}
</span>
}
<!-- #endregion -->
<!-- #region input -->
@case ('input') {
<input [type]="body?.inputType" [class]="body?.inputClass"
[id]="body.inputId ? 'id-'+data[body.inputId] : 'id'+body.attrName"
(click)="body?.event === 'click' ? eventTriggered($event,(body.parameter ? data[body.parameter]:data[body.attrName]),body.action ? body.action : 'click') : ''"
(change)="body?.event === 'change' ? eventTriggered($event,(body.parameter ? data[body.parameter] : data[body.attrName]),body.action ? body.action : 'change'):''">
@if (body.inputLabel) {
<label [for]="body.inputId ? body.inputId : 'id'+body.attrName">{{body.inputLabel}}</label>
}
}
<!-- #endregion -->
<!-- #region innerHTML -->
@case ('innerHTML') {
<span [innerHTML]="body.attrName ? data[body.attrName] : body?.innerHTML" [class]="body?.class"
[style]="body?.style" [id]="body?.id"
(click)="body?.event === 'click' ? eventTriggered($event,(body.parameter ? data[body.parameter] : data[body.attrName])) : ''">
</span>
}
<!-- #endregion -->
<!-- #region Dropdown -->
@case ('dropdown') {
<div class="dropdown">
<button class="btn p-0 px-3 border-0" data-bs-toggle="dropdown">
<i class="fa-solid fa-ellipsis-vertical"></i>
</button>
<ul class="dropdown-menu dropdown-overflow">
@for (dd of body?.dropdownData; track dd.content) {
<li>
<span class="dropdown-item pointer" [attr.data-bs-toggle]="dd.modelId ? 'modal' : ''"
[attr.data-bs-target]="dd.modelId ? '#'+dd.modelId : ''"
[innerHTML]="(dd.icon ? iconArr[dd['icon']] : '') +' '+dd?.content"
(click)="eventTriggered($event,data[dd.parameter],dd?.icon+(body?.attrName ? '-' +body.attrName : ''))"></span>
</li>
}
</ul>
</div>
}
<!-- #endregion -->
<!-- #region Icon -->
@case ('icon') {
<button class="btn border-0" [id]="body?.id" [style]="body?.style"
[attr.data-bs-toggle]="body.modelId ? 'modal' : ''"
[attr.data-bs-target]="body.modelId ? '#'+body.modelId : ''"
(click)="body?.event === 'click' ? eventTriggered($event,body.parameter && data[body.parameter],'click '+body.attrName) : ''"
(dblclick)="body?.event == 'dblclick' ? eventTriggered($event,body.parameter && data[body.parameter],'dblclick'+body.attrName) : ''">
<i [class]="body?.iconClass"></i>
</button>
}
@case ('iconWithText') {
<button class="btn border-0" [id]="body?.id" [style]="body?.style"
[attr.data-bs-toggle]="body.modelId ? 'modal' : ''"
[attr.data-bs-target]="body.modelId ? '#'+body.modelId : ''"
(click)="body?.event === 'click' ? eventTriggered($event,body.parameter && data[body.parameter],'click'+body.attrName) : ''"
(dblclick)="body?.event == 'dblclick' ? eventTriggered($event,body.parameter && data[body.parameter],'dblclick'+body.attrName) : ''">
<i [class]="body?.iconClass"></i>
</button>
<span>{{body?.iconText}}</span>
}
<!-- #endregion -->
<!-- #region Select -->
@case ('select') {
<select [class]="body?.class" [style]="body?.style"
[id]="body?.id+'_'+(body.parameter ? data[body.parameter] : '')"
(change)="eventTriggered($event,body.parameter ? data[body.parameter] : '','change')">
@for (option of body?.optionArr; track option) {
<option [value]="body.optionValue ? option[body.optionValue] : option"
[selected]="body.selecterOption && body.optionValue ? data[body.selecterOption] == option[body.optionValue] : false ">
{{body.optionLabel ? option[body.optionLabel] : option}}
</option>
}
</select>
}
<!-- #endregion -->
<!-- #region Conditional -->
@case ('conditional') {
<span [class]="body?.class" [style]="body?.style">
@if (data[body.attrName] == body.condition) {
<span [innerHTML]="body?.trueStatement"></span>
}@else {
<span [innerHTML]="body?.falseStatement"></span>
}
</span>
}
<!-- #endregion -->
}
</td>
}
</tr>
}
</tbody>
</table>
</div>
<div class="table-bottom">
<div class="row align-items-center mt-3 px-3">
<div class="col-lg-3">
<span>
Displaying
{{ ( ( pagination.page - 1 ) * ( pagination.pageSize ) ) + 1 }}
to
{{ ( ( pagination.page - 1 ) * ( pagination.pageSize ) + (pagination.pageSize) >
pagination.totalItems )
? pagination.totalItems
: ( pagination.page - 1 ) * ( pagination.pageSize ) + (pagination.pageSize) }}
of {{ pagination.totalItems }}
</span>
</div>
<div class="col-lg-9 text-end">
<pagination-controls (pageChange)="changePage($event)" previousLabel="" nextLabel="">
</pagination-controls>
</div>
</div>
</div>
</div>
<!-- #region typescript -->
type Thead = {
displayName: string;
sortable?: boolean;
sortItem?: string;
thClass: string | '';
thStyle?: string;
inputClass?: string;
inputId?: string;
iconClass?: string;
class?: string;
id?: string;
style?: string;
// for elements and events
elementType: Exclude<typeElement, 'serialNo' | 'conditional' | 'dropdown'>;
iconText?: string;
inputType?: string;
inputLabel?: string;
event?: typeEvent;
action?: string;
};
type Tbody = {
attrName: string;
tdClass: string | '';
tdStyle?: string;
inputClass?: string;
inputId?: string;
iconClass?: string;
class?: string;
id?: string;
style?: string;
innerHTML?: string;
//for elements and events
elementType: typeElement;
iconText?: string;
inputType?: string;
inputLabel?: string;
event?: typeEvent;
modelId?: string;
//for routers and parameters
routerLink?: string;
clickFunction?: string;
parameter?: string;
action?: string;
//for dropdown
dropdownData?: typeDropdown[];
//for conditional
condition?: unknown;
trueStatement?: unknown;
falseStatement?: unknown;
//for select
optionArr?: any[];
optionValue?: string;
optionLabel?: string;
selecterOption?: string;
};
type typeElement =
| 'input'
| 'icon'
| 'text'
| 'serialNo'
| 'iconWithText'
| 'innerHTML'
| 'dropdown'
| 'conditional'
| 'select';
type typeEvent = 'click' | 'change' | 'input' | 'dblclick';
type TriggeredEvent = { event: Event; action?: string; parameter: string };
type typeDropdown = {
icon?: string;
content: string;
parameter: string;
modelId?: string;
};
type Pagination = {
totalItems: number;
page: number;
pageSize: number;
optimization: boolean;
getPagination?: boolean;
};
type PaginationOutput = {
page: number;
pageSize: number;
};
export { Thead, Tbody, TriggeredEvent, Pagination, PaginationOutput };
This is a custom table I created, but the $index serial number is not working. How do I fix it?
Rather than installing SSH to access semaphore you could expose port 3000 and use "Integrations".
https://semaphoreui.com/api-docs/#/project/post_project__project_id__integrations
Odoo doesn't really support the authentication you want; that code just isn't there.
Odoo has its own authentication option. When you set "user", it turns on. It works like this: you must first receive a cookie, and then in Postman pass that cookie in the Cookie header. I am sending examples of authentication and subsequent requests from my mobile app; I think it will be clear and you can transfer it to Postman.
Next, here is how to use this cookie.
So you can't get the data in one request. First you need to get a cookie (token), and then make the request with this cookie. Also note that the cookie arrives in the Set-Cookie response header. By the way, there is an alternative option: create an API key in the user's settings and use that instead.
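To illustrate the two-step flow: Odoo's JSON-RPC login endpoint is /web/session/authenticate, and a requests.Session carries the returned session cookie automatically on later calls. The URL, database name, and credentials below are placeholders:

```python
def auth_payload(db, login, password):
    """Build the JSON-RPC body for Odoo's /web/session/authenticate."""
    return {
        "jsonrpc": "2.0",
        "params": {"db": db, "login": login, "password": password},
    }

def odoo_login(base_url, db, login, password):
    import requests  # imported here so auth_payload stays dependency-free
    # A Session stores the cookie from Set-Cookie and reuses it later
    session = requests.Session()
    resp = session.post(f"{base_url}/web/session/authenticate",
                        json=auth_payload(db, login, password))
    resp.raise_for_status()
    return session  # subsequent session.post(...) calls send the cookie
```

After `odoo_login`, make your data requests through the returned session object so the cookie is attached for you, mirroring what you would do manually in Postman.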
For case two, you can also set margin-left: auto on the inner div to move it to the right. Since the text inside the inner div is already right-aligned, this hack works like a charm.
I got something to work by just copying the values of a static array and placing them into the dynamic one like this,
#include <stdlib.h>

void f(int r, int c, int **grid);

int main() {
    int r = 3, c = 5;
    int values[][5] = {
        {0, 1, 2, 3, 4},
        {1, 2, 4, 2, 1},
        {3, 3, 3, 3, 3}
    };
    int **test = malloc(r * sizeof(int *));
    for (int i = 0; i < r; i++) {
        test[i] = malloc(c * sizeof(int));
        for (int j = 0; j < c; j++) { test[i][j] = values[i][j]; }
    }
    f(r, c, test);
    return 0;
}
However, I would need to specify the number of columns of the static array every time I do testing. Is there a shorter way to do this using compound literals and without creating a values variable? I am using the GCC 6.3.0 C compiler.
Found it; it was a very dumb mistake. I did not install pykeops before running the code. Oddly enough, if pykeops is not installed, the KeOps kernels in gpytorch.kernels.keops fall back to non-KeOps kernels. In my case the fallback happened silently, with no warning (somehow the warning generated there was suppressed).
I figured this out by inspecting the source code. IMHO, gpytorch.kernels.keops should raise an exception when it is used without pykeops installed.
I achieved this using the following dependencies:
audio_video_progress_bar: ^2.0.3
just_audio: ^0.9.31
audio_service: ^0.18.4
on_audio_query: ^2.8.1
Full Guide URL: link
OK, but what if I have to do:
drop database test;
create database test owner testuser;
-- HERE I have to connect to test?????
create extension some_extension;
How do I connect here?
I believe you can use the Spring Cloud Config project, which is a centralized configuration system that handles and stores all necessary configuration for your microservices.
Then, all microservices connect to the config server and use the @RefreshScope annotation. For example, a basic tutorial is linked here.
A little late, but here is my approach:
Maintain a separate copy of the function's original logic, e.g. ~/.config/fish/functions/__fish_move_last_copy.fish (that base path is the conventional place for custom fish functions, and is also included in $FISH_FUNCTION_PATH).
Also in ~/.config/fish/functions, write your new __fish_move_last.fish; source __fish_move_last_copy.fish inside it, then pass all of __fish_move_last.fish's arguments through to __fish_move_last_copy.fish.
Add a cron job (or systemd timer, whatever suits you) to copy /usr/share/fish/functions/__fish_move_last.fish to ~/.config/fish/functions/__fish_move_last_copy.fish at the start of every week (too frequent? monthly works as well).
This is how I managed to override many of fish's default functions without the fear of the original function being updated later.
Use mode='markers+lines' and the additional attribute marker=dict(size=sizes1).
In my case I mark only the values in (2, 6, 9):
import plotly.graph_objs as go
import plotly.offline as pyo

sizes1 = [10 if y in (2, 6, 9) else 0 for y in seed1_y_data]
trace1 = go.Scatter(x=x_data, y=seed1_y_data, mode='markers+lines', name='Team A',
                    marker=dict(size=sizes1))
data = [trace1]
layout = go.Layout(title='This is line chart',
                   xaxis={'title': 'this x axis'},
                   yaxis={'title': 'this y axis'},
                   )
fig = go.Figure(data=data, layout=layout)
pyo.plot(fig, filename='line.html')
You just need to restart PowerShell and it will work.
I found the issue in the min_prediction_length parameter. Initially I set it the same as max_prediction_length, but after changing it to a lower value the model worked fine.
It seems that while you’ve increased the timeout in the backend, the Application Gateway’s idleTimeoutInMinutes setting, which defaults to 4 minutes, might still be limiting the connection. Ideally, you should increase this timeout as well.
az network public-ip update \
  --ids /subscriptions//resourceGroups/<resource_group>/providers/Microsoft.Network/publicIPAddresses/ \
  --idle-timeout 30
The issue is Driver Node overloading. To determine the exact reason, check driver logs to identify specific bottlenecks, such as CPU starvation or task queuing. This can indicate whether the issue is CPU, I/O, or something else.
Please share Driver logs and let me know if you need any information.
There is an open issue in the kotlinx.coroutines repository regarding official support for this. It also contains some solutions.
Formula:
=CHOOSEROWS(FILTER(A2:A24, B2:B24 = F4), (COUNTA(FILTER(A2:A24, B2:B24 = F4))-1))
This formula uses FILTER (to get the values matching the name) and CHOOSEROWS to get the desired outcome.
References: CHOOSEROWS, FILTER
Please make sure headerShown: true is set on your Stack.Screen.
Thank you for your quick responses. When I added the relevant references to my csproj file as follows, the files were published.
<Content Include="Scripts\bootstrap.bundle.js" />
<Content Include="Scripts\bootstrap.bundle.min.js" />
Did you ever get an answer to this question? It appears that the SharePoint connector code in Logic Apps does not handle non-home-tenant connections, but nowhere in the documentation does it state that, which is a rather serious issue. Am I missing something here?
Of course, using an image that has the required JDK as its default will solve this issue. But in that case you don't even need the jdk 'openjdk-17.0.5' in your pipeline, because Jenkins and the image have no option other than using that JDK.
If you really have a generic image that contains several JDK versions, from which you select the proper one for your job via the tools jdk setting, you need to configure the JDKs available in your image in the Jenkins Docker agent template settings. Define each available JDK as a "tool" under "node properties". To be able to select the JDKs there, you must first define them in the tools configuration of Jenkins itself.
The "name" you define for each JDK must match the tool jdk name in your pipeline.
And over multiple dimensions: np.apply_over_axes
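A small sketch of how that works: np.apply_over_axes applies the function successively over each listed axis, keeping the reduced dimensions with size 1.

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)
# Sum over axes 1 and 2; the reduced axes are kept with size 1
res = np.apply_over_axes(np.sum, a, [1, 2])
print(res.shape)  # (2, 1, 1)
```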
To implement a good approach for storing forum threads and replies in a database, it is essential to design the schema so it is scalable, efficient, and easy to manage as your forum grows. Here's how you can approach this:

Database Schema Design

You typically need several tables to handle forum threads and replies effectively.

Threads table: stores the main thread information:
- thread_id: primary key (unique identifier for each thread)
- user_id: foreign key linking to the user who started the thread
- title: the title of the thread
- content: initial post content or description
- created_at: timestamp of when the thread was created
- updated_at: timestamp of when the thread was last updated
- last_post_at: timestamp of the last post in the thread (to help with sorting threads by most recent activity)
- views: count of how many times the thread has been viewed (optional but useful for analytics)

Example:

CREATE TABLE threads (
  thread_id INT PRIMARY KEY,
  user_id INT,
  title VARCHAR(255),
  content TEXT,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  last_post_at TIMESTAMP,
  views INT DEFAULT 0
);

Replies table: stores all the replies made to threads:
- reply_id: primary key (unique identifier for each reply)
- thread_id: foreign key linking to the thread
- user_id: foreign key linking to the user who made the reply
- content: content of the reply
- created_at: timestamp when the reply was posted

Example:

CREATE TABLE replies (
  reply_id INT PRIMARY KEY,
  thread_id INT,
  user_id INT,
  content TEXT,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (thread_id) REFERENCES threads(thread_id),
  FOREIGN KEY (user_id) REFERENCES users(user_id)
);

Indexes and Optimization

Create indexes on frequently queried columns, such as thread_id and user_id in the replies table, and last_post_at in the threads table for faster retrieval of the most recent threads. You may also consider indexing created_at if you often query threads or replies by creation date.

Handling Nested Replies (Optional)

If you need to support nested replies (i.e., replies to replies), you can add a parent_reply_id column to the replies table: if null, it's a top-level reply; if populated, it's a reply to another reply.

Example:

ALTER TABLE replies ADD COLUMN parent_reply_id INT NULL;

Optimizing for Read and Write Operations

Forum software often experiences high read-to-write ratios. To optimize for reads (displaying threads and replies), you may use caching techniques (e.g., Redis, Memcached) to store frequently accessed data. For high-write scenarios, ensure that inserts and updates are efficient, and consider batch inserts when posting multiple replies at once.

Customer Feedback Management Integration

Customer feedback is crucial for understanding user needs and improving the forum experience. Here's how you can incorporate customer feedback management into the system:

Add a feedback table: create a separate table to store feedback from forum users. This allows users to share their thoughts on threads, posts, or overall forum functionality.

Example:

CREATE TABLE feedback (
  feedback_id INT PRIMARY KEY,
  user_id INT,
  thread_id INT NULL,  -- link feedback to a specific thread (optional)
  content TEXT,
  rating INT,          -- you can store a rating score (e.g., 1 to 5)
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  FOREIGN KEY (user_id) REFERENCES users(user_id)
);

Types of feedback:
- Rating system: allow users to rate threads, posts, or specific forum features.
- Text feedback: let users provide qualitative feedback, such as suggestions, issues, or complaints.
- Survey forms: embed surveys within threads or on forum pages to collect structured feedback.

Displaying feedback: show user ratings or feedback on threads or posts to help future visitors gauge the quality or relevance of the content, and implement a feedback summary (average ratings or most common feedback topics) on each thread page.

Feedback response and action: allow admins or moderators to respond to feedback within the forum, letting users know their input is valued and being considered. Analyze feedback trends over time, and periodically update the community on changes or improvements made based on their input.

Automating feedback analysis: for large forums, you can implement automated feedback analysis using sentiment analysis tools, or simply aggregate feedback data into dashboards that track user satisfaction, feature requests, and common issues.

By combining these database practices with a customer feedback management system, you can maintain an efficient, scalable forum platform and continuously improve it by acting on user insights.
Try whether plain nvim-lspconfig works fine for Haskell on your machine.
It should be sufficient in most cases.
SQLDelight is a widely used KMP SQL ORM that supports platforms such as JVM, Android, iOS, and JS. However, native support for WASM is currently unavailable, though it may be added in the future.
Add <DBIncludeFamily>Yes</DBIncludeFamily> inside <STATICVARIABLES>
You can check this in this.viewer.layers; whenever you draw new shapes, they are added there.
class DirBody:
    def __init__(self, name, depth):
        self.name = name
        self.depth = depth
        self.childDir = []
        self.childFile = []

    def __str__(self) -> str:
        return self.printDirs() + "\n" + self.printFiles()

    def printDirs(self) -> str:
        pass

    def printFiles(self) -> str:
        pass
I think this is because, by default, Spring doesn't carry the SecurityContext over to the new thread that is used when you call an @Async method, so the SecurityContext is not available there. To carry the SecurityContext to new threads, you have to do this:
@Bean
public InitializingBean initializingBean() {
return () -> SecurityContextHolder.setStrategyName(
SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);
}
This has nothing to do with webflux or servlet. The problem is, that the ObjectMapper config is changed and therefore the output of the actuator endpoints also changed.
In Spring Boot 3 this is very unlikely to happen, as they introduced a separate ObjectMapper for actuator.
In Spring Boot 2 you have to make sure you do not change the default ObjectMapper, but use a separate one for your business code. Or adjust it depending on some condition, like the mime type.
The important thing is that we are talking about the ObjectMapper in your service, not in Spring Boot Admin server. The backend server is just passing through the data it receives.
See also the discussion in the corresponding issue in github: https://github.com/codecentric/spring-boot-admin/issues/3830
It does work in a shared project. The project structure will be similar to the Xamarin.Forms structure. Just install Firebase messaging and also add Google Play services on the Android side.
Next.js remains a frontend framework, even if some people abuse it as more. A proper full-stack alternative is AdonisJS.
When we use Azure DevOps Server 2019, we can see the folder icon directly in the Build tab.
For example:
In Azure DevOps Server 2022, the Pipeline Folder view has been moved to the Pipelines -> All tab.
You can check if you can see the Pipeline folder view in the Pipelines -> All tab.
This is called occlusion, and it is available starting with ARKit 3.5. You can achieve this result with People Occlusion and Object Occlusion in ARKit 3.5 and above.
Sorry about the delay. Did you find a solution?
a) is easy; you just take the last record. Add this to your form:
function on_before_post(item) {
    if (item.is_new()) {
        let copy = item.copy();
        copy.open();
        if (copy.rec_count === 0) {
            item.tach_in.value = 1;
        } else {
            item.tach_in.value = item.tach_out.value;
        }
    }
}
This can be seen on https://msaccess.pythonanywhere.com/: if, for example, you clone a record, it will increase the last record by 1.
b) I'm not sure that I follow. The summary is created automatically, as in the app above. If the flight duration is needed on the form (as in the image), then just calculate it with JS, i.e.:
function on_edit_form_shown(item) {
    if (item.tach_out.value) {
        item.flight_total.value = item.tach_out.value - item.tach_in.value;
    }
}
If the duration is needed on the View grid, then use a similar approach as the msaccess app does for Actual Amount.
Hope this helps.
First download the installer, then install SQL Server Express (including LocalDB). After that, verify the LocalDB installation, then connect to LocalDB using SQL Server Management Studio (SSMS).
Fatal error: Uncaught TypeError: array_keys(): Argument #1 ($array) must be of type array, null given in /home/admin/web/learn.ptenote.com/public_html/dashboard/dashboard_pages_logic.php:5079
Stack trace:
#0 /home/admin/web/learn.ptenote.com/public_html/dashboard/dashboard_pages_logic.php(5079): array_keys()
#1 /home/admin/web/learn.ptenote.com/public_html/dashboard/index.php(1673): require_once('...')
#2 {main}
thrown in /home/admin/web/learn.ptenote.com/public_html/dashboard/dashboard_pages_logic.php on line 5079
It's because one is hosted on IIS and the other on Kestrel.
Alternatively, you can turn to the Headless Platform and use the built-in mechanism:
Window.MouseMove(Point point, MouseButton button, RawInputModifiers modifiers)
Window.KeyPress(Key key, RawInputModifiers modifiers, PhysicalKey physicalKey, string? keySymbol)
Okay, I got the answer. I changed the permissions on the directory using the chmod command:
sudo chmod a+w /home/ec2-user/PythonProgram
Have you tried using RDS Performance Insights to debug what could be your bottleneck?
This post from AWS goes quite in depth on how to troubleshoot this kind of issue: https://aws.amazon.com/blogs/database/optimized-bulk-loading-in-amazon-rds-for-postgresql/
For instance, increasing the IOPS of the underlying EBS volume or adjusting the parameter group settings can help.
The error you're encountering is likely due to a version mismatch between the MongoDB.Driver library and other components in your project, such as Microsoft.EntityFrameworkCore.MongoDB or related MongoDB/Bson libraries. Specifically, the error suggests that the GuidRepresentationMode method is being called but isn't found, which indicates changes or deprecations in the MongoDB.Driver API.
Here’s how you can troubleshoot and resolve this issue:
Check the MongoDB driver version: ensure you are using a compatible version of MongoDB.Driver. The GuidRepresentationMode property was introduced in MongoDB.Driver 2.7. If you're using an older version of the library, upgrade to the latest stable version compatible with your project.
To check the installed version:
Open the NuGet Package Manager or check your .csproj file, and look for the MongoDB.Driver package and its version.
To update, run this command in your terminal: dotnet add package MongoDB.Driver --version <latest_version>
Make sure you type the name in the field and don't use the pre-populated name!
I solved the problem on my own. On Windows, there is already a Crypt32.dll, so VBA was resolving to that one…
How to enable the welcome screen in Android Studio:
File -> Settings -> System Settings -> [Uncheck the checkbox "Reopen projects on startup."]
Please check the image link provided above.
I encountered the same issue while working with crimCV. Have you found a way to resolve this problem? Thank you very much for your time and assistance.
There seem to be several issues with the settings. To modify the root_squash option on an NFS (Network File System) server, update the export configuration in /etc/exports: change the default root_squash option to no_root_squash.
You could also try setting the VS Code terminal settings as follows:
"terminal.integrated.defaultProfile.linux": "bash"
"terminal.integrated.shell.linux": "/usr/bin/bash"
"python.pythonPath": "/usr/bin/python3"
My approach would be to set the right 'if' condition for every job.
As an example:
- name: Check approval status
  if: matrix.target_env == '<env>'
  continue-on-error: true
  id: check_approval
  run: |
    echo "status=success" >> $GITHUB_OUTPUT
I encountered an SSL certificate error while running my Python Flask project. The issue seems to be network-specific.
Maybe I'm misunderstanding the question, but SSM has a concept of "Documents" where you can store your scripts, and it supports "Run Command", which can be used to run a document against your "fleet" of machines.
It even supports rate controls and other more advanced features.
Link for the documentation can be found here: https://docs.aws.amazon.com/systems-manager/latest/userguide/send-commands-multiple.html
The problem was not enough memory in the state array for the thread/block size:
curandState * d_state;
cudaMalloc(&d_state, 195075 * sizeof(curandState) );
__global__ void k_initRand(curandState *state, uint64_t seed){
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    curand_init(seed, tid, 0, &state[tid]);
}
The out-of-bounds error was garbling the printf() data.
Any solution for this? My Angular version is also 16.
I am also facing this problem.
I changed the php.ini file. The max_input_vars directive was commented out, so I uncommented it.
Before, the line looked like this: ;max_input_vars=1000
I removed the ";" so it reads: max_input_vars=1000
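If it helps, here is a small Python sketch (my own, not part of the original answer) that checks whether the max_input_vars directive in a php.ini file's contents is still commented out:

```python
import re

def max_input_vars_status(ini_text):
    """Return ("active"|"commented"|"missing", value) for max_input_vars."""
    # An active line starts (after optional whitespace) with the directive itself.
    active = re.search(r"^\s*max_input_vars\s*=\s*(\d+)", ini_text, re.M)
    if active:
        return "active", int(active.group(1))
    # A commented line has a leading ";" before the directive.
    commented = re.search(r"^\s*;\s*max_input_vars\s*=\s*(\d+)", ini_text, re.M)
    if commented:
        return "commented", int(commented.group(1))
    return "missing", None

print(max_input_vars_status(";max_input_vars=1000"))  # → ('commented', 1000)
print(max_input_vars_status("max_input_vars=1000"))   # → ('active', 1000)
```

Run it over the contents of your php.ini to confirm the edit took effect before restarting the web server.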
Is there a way to achieve this (setting the surface type) when you implement ExoPlayer in Jetpack Compose? It looks like the function to set the surface type is private, and using reflection to access a private method is risky, as ExoPlayer's API changes rapidly. I'm also stuck on this issue and would really appreciate any help.
I ran into this issue as well. Check your project folder: is it a soft link? I solved it by changing my project path.
Restarting the PC worked for me (Windows 11, VS Code).
Looking at the docstring of the hasHandlers method: it searches up through the logger's parents until a handler is found or it reaches the top level. If you want hasHandlers to reflect only the presence of handlers at your logger's own level, setting logger.propagate = False should suffice.
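A minimal sketch of the behaviour described above, using the standard logging module:

```python
import logging

logging.basicConfig()  # attaches a StreamHandler to the root logger

child = logging.getLogger("app.db")  # has no handlers of its own

# hasHandlers() walks up the parent chain, so it finds the root's handler.
before = child.hasHandlers()  # True

# Disabling propagation stops the search at this logger.
child.propagate = False
after = child.hasHandlers()   # False
```

Note that propagate = False also stops log records from bubbling up to ancestor handlers, not just the hasHandlers() search, so use it only if that is what you want.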
For me, I wanted to disable the analyzer for EF Core migration .cs files. Somehow using dotnet_analyzer_diagnostic.severity = none didn't work. I had to use:
[Migrations/**]
generated_code = true
This is documented here.
Check your X-CopilotCloud-Public-API-Key. The error seems to be there.
It will not be the same as OPENAI_API_KEY. Restart your server after changing it. If it still doesn't work, check whether you get correct results from the API endpoints with Postman or via the browser. I hope that solves your problem.
The iPhone 11 has a viewport width of 414px, so try @media (max-width: 415px).
I also think your selectors are mismatched; try this simplified selector:
@media (max-width: 415px) {
#slider-9-slide-11-layer-1.rs-layer {
font-size: 20px !important;
margin-top: 100px !important;
}
}
Also check that you are editing the correct CSS file.
I found the problem. When SSL handshaking occurs, the Kafka broker performs a reverse DNS lookup on the client's IP address. The timeout occurs during this process, so we must configure the client's IP and hostname in the Kafka broker's hosts file to restore normal operation.
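To confirm this from the broker host, here is a quick reverse-lookup check (a sketch of my own using Python's standard library; run it with the client's IP address):

```python
import socket

def reverse_lookup(ip):
    """Return the hostname a PTR lookup resolves for this IP, or None if it fails.

    If this returns None (or hangs) for a client's IP when run on the broker
    host, that reverse lookup is what is stalling the SSL handshake, and a
    hosts-file entry mapping the client's IP to a hostname works around it.
    """
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:  # covers socket.herror and socket.gaierror
        return None
```

A slow (rather than failing) lookup can also be the culprit, so time the call if it returns successfully but the handshake still times out.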
createVisual is part of the Power BI Report Authoring API. Make sure you install the powerbi-report-authoring npm package.
I don't know what beautiful language to use to admire the stupidity of XCode!
The problem is that in Debug mode MAUI has the "Internet" and "Write external storage" permissions by default, but they are missing in Release mode and have to be included explicitly when switching from Debug to Release.
This resolved the problem.
You can install Rosetta2 by running the following command.
sudo softwareupdate --install-rosetta
Through trial and error I've found that this works:
gr.Blocks(css=".progress-text { display: none !important; }")
prefs = {
"profile.default_content_setting_values.media_stream_mic": 1,
"media.default_audio_capture_device": "Device ID"
}
The device ID can be found in Chrome's settings using DevTools. In my case, microphone A is
{0.0.1.00000000}.{6c057a49-4423-4c97-8806-f51e62014e85}. Write the code like this:
prefs = {
"profile.default_content_setting_values.media_stream_mic": 1,
"media.default_audio_capture_device": "{0.0.1.00000000}.{6c057a49-4423-4c97-8806-f51e62014e85}"
}
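For context, here is how these prefs are usually wired into Selenium — a sketch where the helper name and the placeholder device ID are mine, not part of any Selenium API:

```python
def mic_prefs(device_id):
    """Chrome prefs that auto-allow the microphone and pin a capture device."""
    return {
        "profile.default_content_setting_values.media_stream_mic": 1,  # 1 = allow
        "media.default_audio_capture_device": device_id,
    }

# Typical usage (assumes Selenium is installed):
# from selenium import webdriver
# options = webdriver.ChromeOptions()
# options.add_experimental_option("prefs", mic_prefs("<device-id>"))
# driver = webdriver.Chrome(options=options)
```

The add_experimental_option("prefs", ...) call is the standard way to pass Chrome profile preferences through Selenium's ChromeOptions.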
Use the code below to get the date value:
date_tag = container.find("div", class_="_1O8E5N17").text
date_text, date_value = str.split(date_tag, '
MicroStrategy Library doesn't support IIS. You need Tomcat 10 to run new releases. And I do not recommend using IIS even for MicroStrategy Web.
In my case, there was an issue with an import in the specific file where the find() method is used. After fixing that import, the code runs well.
How are these files structured? You mentioned the first one is a server component where you can retrieve the cookies and pass it to the other file, but where is the api.ts and what is it? Isn’t it a server component then?
Change the "Database host" to the Docker container's network IP, for example 172.18.0.2.
Is the .lic file placed in both locations shown in the error? That should help get past this.
You can create a new table and sort the category column by a sort column.
Do not create a relationship between the two tables.
Then create a measure:
MEASURE =
SWITCH (
TRUE (),
SELECTEDVALUE ( 'Table (2)'[category] ) = "A", CALCULATE ( SUM ( 'Table'[value] ), 'Table'[category] = "A" ),
SELECTEDVALUE ( 'Table (2)'[category] ) = "B", CALCULATE ( SUM ( 'Table'[value] ), 'Table'[category] = "B" ),
CALCULATE ( SUM ( 'Table'[value] ) )
)
Verify your credentials: double-check the URL, username, and token you've added to the extension. Even a tiny mistake, like a missing character or an extra space, can cause issues.
Compatibility with Bagisto: Since you’re using version 0.1.6, it’s worth confirming that the plugin is fully compatible with that version. Older versions of Bagisto might have some limitations when it comes to newer plugins.
Browser Extensions Conflict: If you have other extensions installed, like ad blockers or anything similar, try disabling them temporarily. They can sometimes interfere with how the upload icon appears.
Look for Errors: Open your browser’s developer tools (usually by pressing F12) and check the console for any error messages when you’re on AliExpress. Those can give you a better idea of what’s going on.
If none of these steps work, it might be a good idea to contact the plugin developer or consider upgrading to a newer version of Bagisto, which could fix compatibility issues.
By the way, if you’re looking for more resources or tips on dropshipping, feel free to check out my website, PB Dropshipping. I’d be happy to help you out!
dbutils.fs.rm("dbfs:/FileStore/tables/Staging/Customer/_delta_log/", recurse=True)
I also tried this problem on CodeChef, and there seems to be an issue on CodeChef's end. To verify this, I copied their solution code into the editor and compiled it; it produced the same results as mine. I then submitted their solution code and it also failed the test cases.
I'd recommend you skip this question and move on.
Found the issue. Changed em.isJoinedToTransaction() to em.getTransaction().isActive() and now it is working fine.
if (em.getTransaction().isActive()) {
    em.getTransaction().commit();
}